CRAN Package Check Results for Package mlr

Last updated on 2018-08-19 06:46:49 CEST.

Flavor                              Version  Tinstall (s)  Tcheck (s)  Ttotal (s)  Status  Flags
r-devel-linux-x86_64-debian-clang   2.12.1          31.34      764.54      795.88  ERROR
r-devel-linux-x86_64-debian-gcc     2.12.1          26.32      563.74      590.06  ERROR
r-devel-linux-x86_64-fedora-clang   2.12.1                                 877.09  ERROR
r-devel-linux-x86_64-fedora-gcc     2.12.1                                 851.61  ERROR
r-devel-windows-ix86+x86_64         2.12.1          61.00     1519.00     1580.00  ERROR
r-patched-linux-x86_64              2.12.1          24.20      638.06      662.26  ERROR
r-patched-solaris-x86               2.12.1                                1053.90  ERROR
r-release-linux-x86_64              2.12.1          25.50      670.37      695.87  ERROR
r-release-windows-ix86+x86_64       2.12.1          66.00     1489.00     1555.00  ERROR
r-release-osx-x86_64                2.12.1                                         OK
r-oldrel-windows-ix86+x86_64        2.12.1          46.00     1302.00     1348.00  ERROR
r-oldrel-osx-x86_64                 2.12.1                                         ERROR

Additional issues

clang-UBSAN

Check Details

Version: 2.12.1
Check: package dependencies
Result: NOTE
    Packages suggested but not available for checking: ‘elmNN’ ‘lqa’
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc, r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc, r-patched-linux-x86_64, r-release-linux-x86_64
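This NOTE reflects that ‘elmNN’ and ‘lqa’ were archived from CRAN, so the suggested learners that wrap them cannot be exercised during checking. A minimal sketch of the usual guard for an optional (Suggests) dependency, assuming the standard requireNamespace() idiom:

    # Hedged sketch: only touch the suggested package if it is installed,
    # so code degrades gracefully on platforms where it has been archived.
    if (requireNamespace("lqa", quietly = TRUE)) {
      lrn = mlr::makeLearner("classif.lqa")
    } else {
      message("Suggested package 'lqa' unavailable; skipping this learner.")
    }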

Version: 2.12.1
Check: for unstated dependencies in ‘tests’
Result: WARN
    '::' or ':::' imports not declared from:
     ‘lhs’ ‘plyr’
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc, r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc, r-devel-windows-ix86+x86_64
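This WARN means the test files call functions via lhs:: and plyr:: without declaring either package in DESCRIPTION. A hedged sketch of one way to declare them, assuming the usethis helper fits the workflow (editing the Suggests field of DESCRIPTION by hand works equally well):

    # Adds lhs and plyr to the Suggests field of DESCRIPTION, which is
    # what the "unstated dependencies in 'tests'" check asks for.
    usethis::use_package("lhs", type = "Suggests")
    usethis::use_package("plyr", type = "Suggests")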

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [534s/483s]
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 2. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3578 SKIPPED: 0 FAILED: 2
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
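The first failure above is not a defect in the wrapper code itself: the test hard-codes a learner whose backing package, ‘lqa’, has been archived from CRAN. A minimal sketch of how such a test is commonly guarded with testthat (the test body is abbreviated and partly hypothetical):

    test_that("MulticlassWrapper", {
      # Skips cleanly where the archived package is missing, instead of
      # erroring inside makeLearner() as in the output above.
      skip_if_not_installed("lqa")
      lrn = makeLearner("classif.lqa")
      # ... remainder of the test ...
    })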

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [384s/488s]
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 2. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3578 SKIPPED: 0 FAILED: 2
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc
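The second failure is an interface mismatch between mlr and ParamHelpers: by the time sampleValue.Param() runs, the discrete.names argument arrives as NULL and trips assertFlag(). A hedged sketch of the call with the flag supplied explicitly, assuming the signature documented for ParamHelpers::sampleValue():

    library(ParamHelpers)
    ps = makeParamSet(
      makeNumericParam("lambda", lower = 0, upper = 1)
    )
    # Passing discrete.names explicitly satisfies assertFlag(), which the
    # traceback above shows failing on a NULL value.
    sampleValue(ps, discrete.names = FALSE, trafo = FALSE)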

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [9m/11m]
     Running ‘run-basenocran.R’
     Running ‘run-classif1.R’
     Running ‘run-classif2.R’
     Running ‘run-cluster.R’
     Running ‘run-featsel.R’
     Running ‘run-learners-classif.R’
     Running ‘run-learners-classiflabelswitch.R’
     Running ‘run-learners-cluster.R’
     Running ‘run-learners-general.R’
     Running ‘run-learners-multilabel.R’
     Running ‘run-learners-regr.R’
     Running ‘run-learners-surv.R’
     Running ‘run-lint.R’
     Running ‘run-multilabel.R’
     Running ‘run-parallel.R’
     Running ‘run-regr.R’
     Running ‘run-stack.R’
     Running ‘run-surv.R’
     Running ‘run-tune.R’
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     OMP: Warning #96: Cannot form a team with 6 threads, using 2 instead.
     OMP: Hint Consider unsetting KMP_DEVICE_THREAD_LIMIT (KMP_ALL_THREADS), KMP_TEAMS_THREAD_LIMIT, and OMP_THREAD_LIMIT (if any are set).
     ── 2. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3578 SKIPPED: 0 FAILED: 2
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [535s/600s]
     Running ‘run-basenocran.R’
     Running ‘run-classif1.R’
     Running ‘run-classif2.R’
     Running ‘run-cluster.R’
     Running ‘run-featsel.R’
     Running ‘run-learners-classif.R’
     Running ‘run-learners-classiflabelswitch.R’
     Running ‘run-learners-cluster.R’
     Running ‘run-learners-general.R’
     Running ‘run-learners-multilabel.R’
     Running ‘run-learners-regr.R’
     Running ‘run-learners-surv.R’
     Running ‘run-lint.R’
     Running ‘run-multilabel.R’
     Running ‘run-parallel.R’
     Running ‘run-regr.R’
     Running ‘run-stack.R’
     Running ‘run-surv.R’
     Running ‘run-tune.R’
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 2. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3578 SKIPPED: 0 FAILED: 2
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Version: 2.12.1
Check: running tests for arch ‘i386’
Result: ERROR
     Running 'run-base.R' [570s]
     Running 'run-basenocran.R' [1s]
     Running 'run-classif1.R' [0s]
     Running 'run-classif2.R' [0s]
     Running 'run-cluster.R' [1s]
     Running 'run-featsel.R' [1s]
     Running 'run-learners-classif.R' [0s]
     Running 'run-learners-classiflabelswitch.R' [1s]
     Running 'run-learners-cluster.R' [0s]
     Running 'run-learners-general.R' [0s]
     Running 'run-learners-multilabel.R' [0s]
     Running 'run-learners-regr.R' [0s]
     Running 'run-learners-surv.R' [0s]
     Running 'run-lint.R' [5s]
     Running 'run-multilabel.R' [1s]
     Running 'run-parallel.R' [1s]
     Running 'run-regr.R' [1s]
     Running 'run-stack.R' [1s]
     Running 'run-surv.R' [1s]
     Running 'run-tune.R' [1s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-windows-ix86+x86_64

Version: 2.12.1
Check: running tests for arch ‘x64’
Result: ERROR
     Running 'run-base.R' [561s]
     Running 'run-basenocran.R' [1s]
     Running 'run-classif1.R' [1s]
     Running 'run-classif2.R' [1s]
     Running 'run-cluster.R' [1s]
     Running 'run-featsel.R' [0s]
     Running 'run-learners-classif.R' [1s]
     Running 'run-learners-classiflabelswitch.R' [1s]
     Running 'run-learners-cluster.R' [1s]
     Running 'run-learners-general.R' [1s]
     Running 'run-learners-multilabel.R' [1s]
     Running 'run-learners-regr.R' [1s]
     Running 'run-learners-surv.R' [1s]
     Running 'run-lint.R' [6s]
     Running 'run-multilabel.R' [0s]
     Running 'run-parallel.R' [1s]
     Running 'run-regr.R' [1s]
     Running 'run-stack.R' [0s]
     Running 'run-surv.R' [1s]
     Running 'run-tune.R' [1s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-windows-ix86+x86_64

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [436s/441s]
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 2. Error: plotFilterValues (@test_base_generateFilterValuesData.R#68) ──────
     write error, closing pipe to the master
     1: generateFilterValuesData(binaryclass.task, method = filter.classif) at testthat/test_base_generateFilterValuesData.R:68
     2: lapply(filter, function(x) {
     x = do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     missing.score = setdiff(fn, names(x))
     x[missing.score] = NA_real_
     x[match(fn, names(x))]
     })
     3: FUN(X[[i]], ...)
     4: do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     5: (function (task, nselect, method = "md", ...)
     {
     im = randomForestSRC::var.select(getTaskFormula(task), getTaskData(task), method = method,
     verbose = FALSE, ...)$md.obj$order
     setNames(-im[, 1L], rownames(im))
     })(task = structure(list(type = "classif", env = <environment>, weights = NULL, blocking = NULL,
     coordinates = NULL, task.desc = structure(list(id = "binary", type = "classif",
     target = "Class", size = 208L, n.feat = c(numerics = 60L, factors = 0L, ordered = 0L,
     functionals = 0L), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE,
     has.coordinates = FALSE, class.levels = c("M", "R"), positive = "M", negative = "R",
     class.distribution = structure(c(M = 111L, R = 97L), .Dim = 2L, .Dimnames = structure(list(
     c("M", "R")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), nselect = 60L)
     6: randomForestSRC::var.select(getTaskFormula(task), getTaskData(task), method = method,
     verbose = FALSE, ...)
     7: max.subtree(rfsrc.obj, conservative = (conservative == "high"))
     8: mclapply(1:numTree, function(b) {
     subtree <- vector("list", 8)
     names(subtree) <- c("count", "order", "meanSum", "depth", "terminalDepthSum",
     "subOrder", "subOrderDiag", "nodesAtDepth")
     recursiveObject <- list(offset = min(which(nativeArray$treeID == b)), subtree = subtree,
     diagnostic = 0, diagnostic2 = 0)
     recursiveObject$subtree$nodesAtDepth <- rep(NA, MAX.DEPTH)
     recursiveObject$subtree$meanSum <- rep(NA, numParm)
     recursiveObject$subtree$order <- matrix(NA, nrow = numParm, ncol = max(max.order,
     1))
     if (sub.order) {
     recursiveObject$subtree$subOrder <- matrix(1, nrow = numParm, ncol = numParm)
     recursiveObject$subtree$subOrderDiag <- rep(NA, numParm)
     }
     recursiveObject$subtree$depth <- 0
     recursiveObject$subtree$terminalDepthSum <- 0
     recursiveObject$subtree$count <- rep(0, numParm)
     rootParmID <- nativeArray$parmID[recursiveObject$offset]
     offsetMark <- recursiveObject$offset
     stumpCnt <- 0
     recursiveObject <- rfsrcParseTree(recursiveObject, max(max.order, 1), sub.order,
     nativeArray, b, distance = 0, subtreeFlag = rep(FALSE, numParm))
     if (rootParmID != 0) {
     index <- which(recursiveObject$subtree$count == 0)
     recursiveObject$subtree$meanSum[index] <- recursiveObject$subtree$depth
     forestMeanSum <- recursiveObject$subtree$meanSum
     index <- which(is.na(recursiveObject$subtree$order))
     recursiveObject$subtree$order[index] <- recursiveObject$subtree$depth
     orderTree <- recursiveObject$subtree$order
     subtreeCountSum <- (recursiveObject$subtree$count/((recursiveObject$offset -
     offsetMark + 1)/4))
     terminalDepth <- recursiveObject$subtree$terminalDepthSum/((recursiveObject$offset -
     offsetMark + 1)/2)
     if (sub.order) {
     index <- which(recursiveObject$subtree$count > 0)
     diag(recursiveObject$subtree$subOrder)[index] <- recursiveObject$subtree$subOrderDiag[index]
     index <- which(recursiveObject$subtree$count == 0)
     diag(recursiveObject$subtree$subOrder)[index] <- recursiveObject$subtree$depth
     diag(recursiveObject$subtree$subOrder) <- diag(recursiveObject$subtree$subOrder)/recursiveObject$subtree$depth
     subOrderSum <- recursiveObject$subtree$subOrder
     }
     else {
     subOrderSum <- NULL
     }
     nodesAtDepthMatrix <- recursiveObject$subtree$nodesAtDepth
     }
     else {
     stumpCnt <- 1
     forestMeanSum <- orderTree <- subtreeCountSum <- terminalDepth <- subOrderSum <- nodesAtDepthMatrix <- NULL
     }
     return(list(forestMeanSum = forestMeanSum, orderTree = orderTree, subtreeCountSum = subtreeCountSum,
     terminalDepth = terminalDepth, subOrderSum = subOrderSum, stumpCnt = stumpCnt,
     nodesAtDepthMatrix = nodesAtDepthMatrix))
     })
     9: lapply(seq_len(cores), inner.do)
     10: FUN(X[[i]], ...)
     11: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
    
     ── 2. Error: plotFilterValues (@test_base_generateFilterValuesData.R#68) ──────
     argument is of length zero
     1: generateFilterValuesData(binaryclass.task, method = filter.classif) at testthat/test_base_generateFilterValuesData.R:68
     2: lapply(filter, function(x) {
     x = do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     missing.score = setdiff(fn, names(x))
     x[missing.score] = NA_real_
     x[match(fn, names(x))]
     })
     3: FUN(X[[i]], ...)
     4: do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     5: (function (task, nselect, method = "md", ...)
     {
     im = randomForestSRC::var.select(getTaskFormula(task), getTaskData(task), method = method,
     verbose = FALSE, ...)$md.obj$order
     setNames(-im[, 1L], rownames(im))
     })(task = structure(list(type = "classif", env = <environment>, weights = NULL, blocking = NULL,
     coordinates = NULL, task.desc = structure(list(id = "binary", type = "classif",
     target = "Class", size = 208L, n.feat = c(numerics = 60L, factors = 0L, ordered = 0L,
     functionals = 0L), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE,
     has.coordinates = FALSE, class.levels = c("M", "R"), positive = "M", negative = "R",
     class.distribution = structure(c(M = 111L, R = 97L), .Dim = 2L, .Dimnames = structure(list(
     c("M", "R")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), nselect = 60L)
     6: randomForestSRC::var.select(getTaskFormula(task), getTaskData(task), method = method,
     verbose = FALSE, ...)
     7: max.subtree(rfsrc.obj, conservative = (conservative == "high"))
    
     ── 3. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3572 SKIPPED: 0 FAILED: 3
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: plotFilterValues (@test_base_generateFilterValuesData.R#68)
     3. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-linux-x86_64
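The extra plotFilterValues failure on this flavor originates in a forked worker dying inside randomForestSRC::var.select() ("write error, closing pipe to the master"), which then resurfaces as "argument is of length zero". A hedged sketch of pinning that package to a single core, assuming the rf.cores/mc.cores options its documentation describes (my.data is a placeholder data frame):

    # randomForestSRC consults these options for OpenMP and mclapply()
    # parallelism; a single core avoids the broken pipe seen above.
    options(rf.cores = 1L, mc.cores = 1L)
    randomForestSRC::var.select(Class ~ ., data = my.data, verbose = FALSE)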

Version: 2.12.1
Check: package dependencies
Result: NOTE
    Packages suggested but not available for checking:
     ‘elmNN’ ‘FSelector’ ‘lqa’ ‘RWeka’
Flavor: r-patched-solaris-x86

Version: 2.12.1
Check: installed package size
Result: NOTE
     installed size is 5.1Mb
     sub-directories of 1Mb or more:
     R 1.5Mb
     data 2.4Mb
Flavor: r-patched-solaris-x86
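The size NOTE is dominated by the 2.4Mb data directory. A hedged sketch of one common mitigation, re-saving the shipped .rda files with the best available compression via base R's tools package:

    # Re-compresses every .rda under data/ in place, choosing among
    # gzip/bzip2/xz per file; run from the package source directory.
    tools::resaveRdaFiles("data", compress = "auto")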

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [11m/13m]
     Running ‘run-basenocran.R’
     Running ‘run-classif1.R’
     Running ‘run-classif2.R’
     Running ‘run-cluster.R’
     Running ‘run-featsel.R’
     Running ‘run-learners-classif.R’
     Running ‘run-learners-classiflabelswitch.R’
     Running ‘run-learners-cluster.R’
     Running ‘run-learners-general.R’
     Running ‘run-learners-multilabel.R’
     Running ‘run-learners-regr.R’
     Running ‘run-learners-surv.R’
     Running ‘run-lint.R’
     Running ‘run-multilabel.R’
     Running ‘run-parallel.R’
     Running ‘run-regr.R’
     Running ‘run-stack.R’
     Running ‘run-surv.R’
     Running ‘run-tune.R’
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: BaggingWrapper with glmnet (#958) (@test_base_BaggingWrapper.R#71)
     need at least two non-NA values to interpolate
     1: predict(mod, multiclass.task) at testthat/test_base_BaggingWrapper.R:71
     2: predict.WrappedModel(mod, multiclass.task)
     3: measureTime(fun1({
     p = fun2(fun3(do.call(predictLearner2, pars)))
     }))
     4: force(expr)
     5: fun1({
     p = fun2(fun3(do.call(predictLearner2, pars)))
     })
     6: evalVis(expr)
     7: withVisible(eval(expr, pf))
     8: eval(expr, pf)
     9: eval(expr, pf)
     10: fun2(fun3(do.call(predictLearner2, pars)))
     ...
     32: fun3(do.call(predictLearner2, pars))
     33: do.call(predictLearner2, pars)
     34: (function (.learner, .model, .newdata, ...)
     {
     if (.learner$fix.factors.prediction) {
     fls = .model$factor.levels
     ns = names(fls)
     ns = intersect(colnames(.newdata), ns)
     fls = fls[ns]
     if (length(ns) > 0L)
     .newdata[ns] = mapply(factor, x = .newdata[ns], levels = fls, SIMPLIFY = FALSE)
     }
     p = predictLearner(.learner, .model, .newdata, ...)
     p = checkPredictLearnerOutput(.learner, .model, p)
     return(p)
     })(.learner = structure(list(id = "classif.glmnet", type = "classif", package = "glmnet",
     properties = c("numerics", "factors", "prob", "twoclass", "multiclass", "weights"
     ), par.set = structure(list(pars = list(alpha = structure(list(id = "alpha",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .model = structure(list(learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "predict"), class = c("LearnerParam", "Param"
     )), nlambda = structure(list(id = "nlambda", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), intercept = structure(list(
     id = "intercept", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), thresh = structure(list(id = "thresh", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lower.limits = structure(list(
     id = "lower.limits", type = "numericvector", len = NA_integer_, lower = -Inf,
     upper = 0, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), upper.limits = structure(list(
     id = "upper.limits", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), maxit = structure(list(
     id = "maxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100000L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"),
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.multinomial = structure(list(
     id = "type.multinomial", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), devmax = structure(list(id = "devmax", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), big = structure(list(
     id = "big", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 9.9e+35,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), exmx = structure(list(id = "exmx", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 250, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), prec = structure(list(
     id = "prec", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-10,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param"))), forbidden = NULL), class = c("LearnerParamSet",
     "ParamSet")), par.vals = list(s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), learner.model = structure(list(a0 = structure(c(0.0199359873032928,
     -0.0200693473104062, 0.000133360007113439, 0.019935987303293, -0.0200693473104062,
     0.000133360007113242, 0.0199359873032929, -0.0200693473104063, 0.000133360007113373,
     0.0199359873032929, -0.0200693473104062, 0.00013336000711324, 0.0199359873032928,
     -0.0200693473104065, 0.000133360007113695), .Dim = c(3L, 5L), .Dimnames = list(c("setosa",
     "versicolor", "virginica"), c("s0", "s1", "s2", "s3", "s4"))), beta = list(setosa = new("dgCMatrix",
     i = c(0L, 0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Petal.Width",
     "Sepal.Width", "Sepal.Length", "Petal.Length"), c("s0", "s1", "s2", "s3", "s4"
     )), x = c(0, 0, 0, 0, 0), factors = list()), versicolor = new("dgCMatrix", i = c(0L,
     0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Petal.Width", "Sepal.Width",
     "Sepal.Length", "Petal.Length"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0,
     0, 0), factors = list()), virginica = new("dgCMatrix", i = c(0L, 0L, 0L, 0L, 0L),
     p = 0:5, Dim = 4:5, Dimnames = list(c("Petal.Width", "Sepal.Width", "Sepal.Length",
     "Petal.Length"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0, 0, 0), factors = list())),
     dfmat = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L,
     5L), .Dimnames = list(c("setosa", "versicolor", "virginica"), c("s0", "s1", "s2",
     "s3", "s4"))), df = function (x, df1, df2, ncp, log = FALSE)
     {
     if (missing(ncp))
     .Call(C_df, x, df1, df2, log)
     else .Call(C_dnf, x, df1, df2, ncp, log)
     }, dim = 4:5, lambda = c(NaN, NaN, NaN, NaN, NaN), dev.ratio = c(-8.71128973871585e-16,
     -4.67839489706697e-16, -8.71128973871585e-16, -8.73102980157267e-16, -8.72115977014426e-16
     ), nulldev = 329.54368393334, npasses = 18L, jerr = 0L, offset = FALSE, classnames = c("setosa",
     "versicolor", "virginica"), grouped = FALSE, call = (function (x, y, family = c("gaussian",
     "binomial", "poisson", "multinomial", "cox", "mgaussian"), weights, offset = NULL,
     alpha = 1, nlambda = 100, lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04),
     lambda = NULL, standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars +
     1, pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1,
     nvars), lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05, type.gaussian = ifelse(nvars <
     500, "covariance", "naive"), type.logistic = c("Newton", "modified.Newton"),
     standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
     {
     family = match.arg(family)
     if (alpha > 1) {
     warning("alpha >1; set to 1")
     alpha = 1
     }
     if (alpha < 0) {
     warning("alpha<0; set to 0")
     alpha = 0
     }
     alpha = as.double(alpha)
     this.call = match.call()
     nlam = as.integer(nlambda)
     y = drop(y)
     np = dim(x)
     if (is.null(np) | (np[2] <= 1))
     stop("x should be a matrix with 2 or more columns")
     nobs = as.integer(np[1])
     if (missing(weights))
     weights = rep(1, nobs)
     else if (length(weights) != nobs)
     stop(paste("number of elements in weights (", length(weights), ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     nvars = as.integer(np[2])
     dimy = dim(y)
     nrowy = ifelse(is.null(dimy), length(y), dimy[1])
     if (nrowy != nobs)
     stop(paste("number of observations in y (", nrowy, ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     vnames = colnames(x)
     if (is.null(vnames))
     vnames = paste("V", seq(nvars), sep = "")
     ne = as.integer(dfmax)
     nx = as.integer(pmax)
     if (missing(exclude))
     exclude = integer(0)
     if (any(penalty.factor == Inf)) {
     exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
     exclude = sort(unique(exclude))
     }
     if (length(exclude) > 0) {
     jd = match(exclude, seq(nvars), 0)
     if (!all(jd > 0))
     stop("Some excluded variables out of range")
     penalty.factor[jd] = 1
     jd = as.integer(c(length(jd), jd))
     }
     else jd = as.integer(0)
     vp = as.double(penalty.factor)
     internal.parms = glmnet.control()
     if (any(lower.limits > 0)) {
     stop("Lower limits should be non-positive")
     }
     if (any(upper.limits < 0)) {
     stop("Upper limits should be non-negative")
     }
     lower.limits[lower.limits == -Inf] = -internal.parms$big
     upper.limits[upper.limits == Inf] = internal.parms$big
     if (length(lower.limits) < nvars) {
     if (length(lower.limits) == 1)
     lower.limits = rep(lower.limits, nvars)
     else stop("Require length 1 or nvars lower.limits")
     }
     else lower.limits = lower.limits[seq(nvars)]
     if (length(upper.limits) < nvars) {
     if (length(upper.limits) == 1)
     upper.limits = rep(upper.limits, nvars)
     else stop("Require length 1 or nvars upper.limits")
     }
     else upper.limits = upper.limits[seq(nvars)]
     cl = rbind(lower.limits, upper.limits)
     if (any(cl == 0)) {
     fdev = glmnet.control()$fdev
     if (fdev != 0) {
     glmnet.control(fdev = 0)
     on.exit(glmnet.control(fdev = fdev))
     }
     }
     storage.mode(cl) = "double"
     isd = as.integer(standardize)
     intr = as.integer(intercept)
     if (!missing(intercept) && family == "cox")
     warning("Cox model has no intercept")
     jsd = as.integer(standardize.response)
     thresh = as.double(thresh)
     if (is.null(lambda)) {
     if (lambda.min.ratio >= 1)
     stop("lambda.min.ratio should be less than 1")
     flmin = as.double(lambda.min.ratio)
     ulam = double(1)
     }
     else {
     flmin = as.double(1)
     if (any(lambda < 0))
     stop("lambdas should be non-negative")
     ulam = as.double(rev(sort(lambda)))
     nlam = as.integer(length(lambda))
     }
     is.sparse = FALSE
     ix = jx = NULL
     if (inherits(x, "sparseMatrix")) {
     is.sparse = TRUE
     x = as(x, "CsparseMatrix")
     x = as(x, "dgCMatrix")
     ix = as.integer(x@p + 1)
     jx = as.integer(x@i + 1)
     x = as.double(x@x)
     }
     kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
     if (family == "multinomial") {
     type.multinomial = match.arg(type.multinomial)
     if (type.multinomial == "grouped")
     kopt = 2
     }
     kopt = as.integer(kopt)
     fit = switch(family, gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset,
     type.gaussian, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam,
     thresh, isd, intr, vnames, maxit), poisson = fishnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, intr, vnames, maxit), binomial = lognet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl,
     ne, nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family),
     multinomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
     nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
     intr, vnames, maxit, kopt, family), cox = coxnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, vnames, maxit), mgaussian = mrelnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp,
     cl, ne, nx, nlam, flmin, ulam, thresh, isd, jsd, intr, vnames, maxit))
     if (is.null(lambda))
     fit$lambda = fix.lam(fit$lambda)
     fit$call = this.call
     fit$nobs = nobs
     class(fit) = c(class(fit), "glmnet")
     fit
     })(x = structure(c(0.6, 2.3, 1.5, 2.2, 2.3, 0.3, 1, 1.5, 1.2, 1.5, 2.3, 1.5,
     1.9, 1.6, 0.4, 1.4, 0.2, 0.3, 0.2, 2.3, 1.5, 1.8, 1.3, 1.8, 1.1, 1.7, 1, 1.3,
     0.6, 0.2, 2.5, 2.3, 1.8, 1.5, 0.2, 1.3, 2, 0.1, 0.2, 0.2, 0.4, 1, 1, 1.3, 0.2,
     0.2, 0.2, 1.1, 0.2, 2.1, 0.3, 1.5, 1.5, 0.3, 1.5, 0.2, 0.3, 2.1, 1.4, 1.6, 1.3,
     0.2, 1, 0.3, 2, 1, 2, 2, 1.5, 1.4, 2, 1.3, 1.7, 0.2, 1.3, 0.2, 1.6, 1.4, 1.5,
     0.4, 0.2, 2.5, 1, 2.3, 0.4, 1.4, 2, 1.5, 2.2, 0.4, 0.3, 1.3, 1.5, 1.1, 0.2, 0.2,
     2.2, 0.2, 1.8, 1.4, 1.3, 0.2, 1.2, 2.3, 1.5, 1.5, 1.8, 1.4, 1.5, 0.2, 2.4, 0.3,
     0.1, 1.9, 1.8, 0.4, 1.2, 2.3, 1.3, 1, 1.3, 0.2, 0.2, 0.1, 1.3, 2, 0.5, 0.1, 0.4,
     1.8, 1.2, 1.5, 2.5, 2, 1.5, 1.1, 1.8, 2.2, 1.9, 1.4, 0.2, 1.5, 0.2, 0.2, 1.8,
     0.2, 0.2, 0.2, 0.2, 2.5, 3.5, 2.6, 3, 2.8, 3.1, 3.4, 2.6, 2.8, 2.7, 2.2, 3.2,
     2.2, 2.7, 3.4, 4.4, 2.6, 3.5, 3.4, 3.3, 3.2, 2.8, 2.9, 2.9, 3, 2.5, 2.5, 2.4,
     2.5, 3.5, 3.6, 3.3, 3, 2.9, 2.2, 3.1, 2.8, 2.5, 4.1, 3.2, 3.1, 3.7, 2.2, 2.2,
     2.8, 3.6, 3.4, 3.1, 2.5, 3.4, 2.8, 3.4, 3, 2.2, 3.8, 3, 3.1, 3.8, 3, 2.6, 3.3,
     2.8, 4, 2.4, 2.3, 2.8, 2.7, 2.8, 2.8, 2.2, 3.1, 2.5, 2.7, 2.5, 3.5, 2.8, 4.2,
     3.3, 3, 3.1, 3.9, 3.5, 3.3, 2.2, 2.6, 4.4, 3.1, 3, 2.8, 2.8, 3.4, 3.8, 2.9, 3.2,
     2.5, 3.7, 3.4, 3.8, 4, 3.2, 2.8, 2.5, 3.3, 2.8, 3.2, 2.5, 2.8, 3.1, 3, 3, 3.6,
     3.1, 3, 3.1, 2.7, 2.5, 3.7, 2.7, 3.2, 2.3, 2, 2.9, 3.2, 3.8, 4.1, 2.8, 3, 3.3,
     3, 3.7, 2.9, 2.6, 2.8, 3.3, 3.2, 2.9, 2.5, 2.7, 3.8, 2.5, 3.1, 3.8, 3, 3, 3.5,
     2.8, 3.1, 3.2, 3.4, 3.5, 3.6, 5, 7.7, 5.9, 6.4, 6.9, 4.6, 5.7, 6.3, 5.8, 6.2,
     6.8, 6.2, 5.8, 6, 5.7, 6.1, 5.5, 4.6, 5, 6.8, 6.3, 6.3, 5.7, 5.9, 5.1, 4.9, 5.5,
     5.5, 5, 4.6, 6.7, 7.7, 6.3, 6, 4.6, 6.1, 5.7, 5.2, 4.6, 4.9, 5.1, 6, 6, 5.7,
     4.6, 5.4, 4.9, 5.6, 5.1, 6.4, 4.6, 5.6, 6, 5.7, 5.4, 4.8, 5.1, 6.8, 6.1, 6.3,
     5.7, 5.8, 4.9, 4.5, 7.7, 5.8, 5.6, 5.6, 6, 6.7, 5.7, 5.6, 4.9, 5.1, 6.1, 5.5,
     6.3, 6.1, 6.9, 5.4, 5.5, 6.3, 6, 7.7, 5.7, 6.7, 6.5, 6.3, 6.4, 5, 5.1, 6.2, 6.4,
     5.1, 5.3, 5.2, 7.7, 5.8, 5.9, 6.8, 5.5, 5, 6.1, 6.8, 6.3, 6.3, 6.4, 6.1, 5.9,
     4.6, 6.7, 4.8, 4.9, 5.8, 6.7, 5.1, 5.8, 6.8, 6.3, 5, 6.2, 4.6, 5.1, 5.2, 5.7,
     6.5, 5.1, 4.3, 5.1, 6.3, 5.8, 6.3, 6.3, 6.5, 6, 5.1, 6.3, 7.7, 6.3, 6.7, 5.1,
     5.9, 4.9, 5.2, 6.2, 4.9, 5, 4.8, 5.5, 7.2, 1.6, 6.9, 4.2, 5.6, 5.1, 1.4, 3.5,
     5.1, 3.9, 4.5, 5.9, 4.5, 5.1, 4.5, 1.5, 5.6, 1.3, 1.4, 1.4, 5.9, 5.1, 5.6, 4.2,
     5.1, 3, 4.5, 3.7, 4, 1.6, 1, 5.7, 6.1, 5.6, 5, 1.5, 4, 5, 1.5, 1.4, 1.5, 1.5,
     4, 4, 4.5, 1, 1.7, 1.5, 3.9, 1.5, 5.6, 1.4, 4.5, 5, 1.7, 4.5, 1.6, 1.5, 5.5,
     5.6, 4.7, 4.1, 1.2, 3.3, 1.3, 6.7, 4.1, 4.9, 4.9, 5, 4.4, 5, 4.2, 4.5, 1.4, 4,
     1.4, 4.7, 4.6, 4.9, 1.3, 1.3, 6, 4, 6.9, 1.5, 4.4, 5.2, 5.1, 5.6, 1.6, 1.5, 4.3,
     4.5, 3, 1.5, 1.4, 6.7, 1.2, 4.8, 4.8, 4, 1.4, 4.7, 5.9, 4.9, 5.1, 5.5, 4.6, 4.2,
     1, 5.6, 1.4, 1.5, 5.1, 5.8, 1.5, 3.9, 5.9, 4.4, 3.5, 4.3, 1.4, 1.6, 1.5, 4.5,
     5.2, 1.7, 1.1, 1.5, 5.6, 4, 5.1, 6, 5.1, 4.5, 3, 4.9, 6.7, 5, 4.4, 1.6, 4.2,
     1.4, 1.5, 4.8, 1.5, 1.2, 1.6, 1.3, 6.1), .Dim = c(150L, 4L), .Dimnames = list(
     c("44", "119", "62", "133", "142", "7", "80", "134", "83", "69", "144", "69.1",
     "102", "86", "16", "135", "37", "7.1", "50", "144.1", "134.1", "104", "97",
     "150", "99", "107", "82", "90", "44.1", "23", "145", "136", "104.1", "120",
     "4", "72", "114", "33", "48", "35", "22", "63", "63.1", "56", "23.1", "21",
     "35.1", "70", "40", "129", "7.2", "67", "120.1", "19", "85", "31", "20",
     "113", "135.1", "57", "100", "15", "58", "42", "123", "68", "122", "122.1",
     "120.2", "66", "114.1", "95", "107.1", "1", "72.1", "34", "57.1", "92", "53",
     "17", "37.1", "101", "63.2", "119.1", "16.1", "66.1", "148", "134.2", "133.1",
     "27", "20.1", "98", "52", "99.1", "49", "29", "118", "15.1", "71", "77",
     "90.1", "50.1", "74", "144.2", "73", "134.3", "138", "92.1", "62.1", "23.2",
     "141", "46", "10", "143", "109", "22.1", "83.1", "144.3", "88", "61", "98.1",
     "48.1", "47", "33.1", "56.1", "148.1", "24", "14", "22.2", "104.2", "93",
     "134.4", "101.1", "111", "79", "99.2", "124", "118.1", "147", "66.2", "47.1",
     "62.2", "2", "28", "127", "35.2", "36", "12", "37.2", "110"), c("Petal.Width",
     "Sepal.Width", "Sepal.Length", "Petal.Length"))), y = structure(c(1L, 3L,
     2L, 3L, 3L, 1L, 2L, 3L, 2L, 2L, 3L, 2L, 3L, 2L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L,
     2L, 3L, 2L, 3L, 2L, 2L, 1L, 1L, 3L, 3L, 3L, 3L, 1L, 2L, 3L, 1L, 1L, 1L, 1L, 2L,
     2L, 2L, 1L, 1L, 1L, 2L, 1L, 3L, 1L, 2L, 3L, 1L, 2L, 1L, 1L, 3L, 3L, 2L, 2L, 1L,
     2L, 1L, 3L, 2L, 3L, 3L, 3L, 2L, 3L, 2L, 3L, 1L, 2L, 1L, 2L, 2L, 2L, 1L, 1L, 3L,
     2L, 3L, 1L, 2L, 3L, 3L, 3L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 3L, 1L, 2L, 2L, 2L, 1L,
     2L, 3L, 2L, 3L, 3L, 2L, 2L, 1L, 3L, 1L, 1L, 3L, 3L, 1L, 2L, 3L, 2L, 2L, 2L, 1L,
     1L, 1L, 2L, 3L, 1L, 1L, 1L, 3L, 2L, 3L, 3L, 3L, 2L, 2L, 3L, 3L, 3L, 2L, 1L, 2L,
     1L, 1L, 3L, 1L, 1L, 1L, 1L, 3L), .Label = c("setosa", "versicolor", "virginica"
     ), class = "factor"), family = "multinomial"), nobs = 150L), class = c("multnet",
     "glmnet"), mlr.train.info = structure(list(factors = structure(list(), .Names = character(0)),
     ordered = structure(list(), .Names = character(0)), restore.levels = FALSE, factors.to.dummies = FALSE,
     ordered.to.int = FALSE), class = "FixDataInfo")), task.desc = structure(list(
     id = "multiclass", type = "classif", target = "Species", size = 150L, n.feat = c(numerics = 4L,
     factors = 0L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("setosa", "versicolor",
     "virginica"), positive = NA_character_, negative = NA_character_, class.distribution = structure(c(setosa = 51L,
     versicolor = 49L, virginica = 50L), .Dim = 3L, .Dimnames = structure(list(c("setosa",
     "versicolor", "virginica")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc")), subset = c(44L, 119L, 62L, 133L, 142L, 7L, 80L,
     134L, 83L, 69L, 144L, 69L, 102L, 86L, 16L, 135L, 37L, 7L, 50L, 144L, 134L, 104L,
     97L, 150L, 99L, 107L, 82L, 90L, 44L, 23L, 145L, 136L, 104L, 120L, 4L, 72L, 114L,
     33L, 48L, 35L, 22L, 63L, 63L, 56L, 23L, 21L, 35L, 70L, 40L, 129L, 7L, 67L, 120L,
     19L, 85L, 31L, 20L, 113L, 135L, 57L, 100L, 15L, 58L, 42L, 123L, 68L, 122L, 122L,
     120L, 66L, 114L, 95L, 107L, 1L, 72L, 34L, 57L, 92L, 53L, 17L, 37L, 101L, 63L, 119L,
     16L, 66L, 148L, 134L, 133L, 27L, 20L, 98L, 52L, 99L, 49L, 29L, 118L, 15L, 71L, 77L,
     90L, 50L, 74L, 144L, 73L, 134L, 138L, 92L, 62L, 23L, 141L, 46L, 10L, 143L, 109L,
     22L, 83L, 144L, 88L, 61L, 98L, 48L, 47L, 33L, 56L, 148L, 24L, 14L, 22L, 104L, 93L,
     134L, 101L, 111L, 79L, 99L, 124L, 118L, 147L, 66L, 47L, 62L, 2L, 28L, 127L, 35L,
     36L, 12L, 37L, 110L), features = c("Petal.Width", "Sepal.Width", "Sepal.Length",
     "Petal.Length"), factor.levels = list(Species = c("setosa", "versicolor", "virginica"
     )), time = 0.048, dump = NULL), class = "WrappedModel"), .newdata = structure(list(
     Petal.Width = c(0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1,
     0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2,
     0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4,
     0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1, 1.3, 1.4, 1, 1.5,
     1, 1.4, 1.3, 1.4, 1.5, 1, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5,
     1, 1.1, 1, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1, 1.3, 1.2,
     1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2, 1.9,
     2.1, 2, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2, 2, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1,
     1.6, 1.9, 2, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5,
     2.3, 1.9, 2, 2.3, 1.8), Sepal.Width = c(3.5, 3, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4,
     2.9, 3.1, 3.7, 3.4, 3, 3, 4, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4,
     3, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3, 3.4, 3.5, 2.3,
     3.2, 3.5, 3.8, 3, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4,
     2.9, 2.7, 2, 3, 2.2, 2.9, 2.9, 3.1, 3, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9,
     3, 2.8, 3, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3, 3.4, 3.1, 2.3, 3, 2.5, 2.6, 3, 2.6,
     2.3, 2.7, 3, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3, 2.9, 3, 3, 2.5, 2.9, 2.5, 3.6,
     3.2, 2.7, 3, 2.5, 2.8, 3.2, 3, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8,
     3, 2.8, 3, 2.8, 3.8, 2.8, 2.8, 2.6, 3, 3.4, 3.1, 3, 3.1, 3.1, 3.1, 2.7, 3.2,
     3.3, 3, 2.5, 3, 3.4, 3), Sepal.Length = c(5.1, 4.9, 4.7, 4.6, 5, 5.4, 4.6, 5,
     4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1,
     4.8, 5, 5, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5, 5.5, 4.9, 4.4, 5.1, 5,
     4.5, 4.4, 5, 5.1, 4.8, 5.1, 4.6, 5.3, 5, 7, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9,
     6.6, 5.2, 5, 5.9, 6, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4,
     6.6, 6.8, 6.7, 6, 5.7, 5.5, 5.5, 5.8, 6, 5.4, 6, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1,
     5.8, 5, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3,
     6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6, 6.9, 5.6, 7.7, 6.3,
     6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6, 6.9,
     6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9), Petal.Length = c(1.4, 1.4,
     1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4,
     1.7, 1.5, 1.7, 1.5, 1, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4,
     1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4,
     4.7, 4.5, 4.9, 4, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4, 4.7, 3.6, 4.4, 4.5,
     4.1, 4.5, 3.9, 4.8, 4, 4.9, 4.7, 4.3, 4.4, 4.8, 5, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1,
     4.5, 4.5, 4.7, 4.4, 4.1, 4, 4.4, 4.6, 4, 3.3, 4.2, 4.2, 4.2, 4.3, 3, 4.1, 6,
     5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5, 5.1, 5.3, 5.5,
     6.7, 6.9, 5, 5.7, 4.9, 6.7, 4.9, 5.7, 6, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1,
     5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5, 5.2, 5.4, 5.1)), row.names = c(NA,
     150L), class = "data.frame"), s = 0.01)
     35: predictLearner(.learner, .model, .newdata, ...)
     36: predictLearner.classif.glmnet(.learner, .model, .newdata, ...)
     37: predict(.model$learner.model, newx = .newdata, type = "class", ...)
     38: predict.multnet(.model$learner.model, newx = .newdata, type = "class", ...)
     39: lambda.interp(lambda, s)
     40: approx(lambda, seq(lambda), sfrac)
     41: stop("need at least two non-NA values to interpolate")
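
     # A minimal sketch, assuming only base R: the fitted multnet's lambda
     # component in the dump above is c(NaN, NaN, NaN, NaN, NaN), so
     # lambda.interp() in frame 39 hands approx() an x vector with no
     # non-NA values, which is exactly what frame 41 reports.
     lambda = c(NaN, NaN, NaN, NaN, NaN)     # fit$lambda from the dump
     approx(lambda, seq_along(lambda), xout = 0.5)
     # Error in approx(...) : need at least two non-NA values to interpolate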
    
     ── 2. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
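    
     The failure above is mechanical: constructing the learner eagerly loads its backing package via requirePackages() (frame 7), so a suggested package that is absent from the check machine turns into a hard test error at stopf() (frame 8). A minimal sketch, assuming testthat's skip_if_not_installed() helper, of how such a test could skip cleanly instead when 'lqa' is unavailable:
    
         library(testthat)
         library(mlr)
         test_that("MulticlassWrapper", {
           # Skip instead of erroring inside requirePackages() when the
           # suggested backing package is not installed on this machine.
           skip_if_not_installed("lqa")
           lrn = makeLearner("classif.lqa")
           expect_true(inherits(lrn, "Learner"))
         })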
    
     ── 3. Error: PreprocWrapper with glmnet (#958) (@test_base_PreprocWrapper.R#47)
     need at least two non-NA values to interpolate
     1: predict(mod, multiclass.task) at testthat/test_base_PreprocWrapper.R:47
     2: predict.WrappedModel(mod, multiclass.task)
     3: measureTime(fun1({
     p = fun2(fun3(do.call(predictLearner2, pars)))
     }))
     4: force(expr)
     5: fun1({
     p = fun2(fun3(do.call(predictLearner2, pars)))
     })
     6: evalVis(expr)
     7: withVisible(eval(expr, pf))
     8: eval(expr, pf)
     9: eval(expr, pf)
     10: fun2(fun3(do.call(predictLearner2, pars)))
     ...
     16: NextMethod(.newdata = .newdata)
     17: predictLearner.BaseWrapper(.learner, .model, .newdata, ..., s = 0.01, .newdata = structure(list(
     Sepal.Length = c(5.1, 4.9, 4.7, 4.6, 5, 5.4, 4.6, 5, 4.4, 4.9, 5.4, 4.8, 4.8,
     4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5, 5, 5.2, 5.2, 4.7,
     4.8, 5.4, 5.2, 5.5, 4.9, 5, 5.5, 4.9, 4.4, 5.1, 5, 4.5, 4.4, 5, 5.1, 4.8, 5.1,
     4.6, 5.3, 5, 7, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5, 5.9, 6, 6.1,
     5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6, 5.7,
     5.5, 5.5, 5.8, 6, 5.4, 6, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5, 5.6, 5.7, 5.7,
     6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8,
     5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4,
     7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7,
     6.7, 6.3, 6.5, 6.2, 5.9), Sepal.Width = c(3.5, 3, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4,
     2.9, 3.1, 3.7, 3.4, 3, 3, 4, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4,
     3, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3, 3.4, 3.5, 2.3,
     3.2, 3.5, 3.8, 3, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4,
     2.9, 2.7, 2, 3, 2.2, 2.9, 2.9, 3.1, 3, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9,
     3, 2.8, 3, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3, 3.4, 3.1, 2.3, 3, 2.5, 2.6, 3, 2.6,
     2.3, 2.7, 3, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3, 2.9, 3, 3, 2.5, 2.9, 2.5, 3.6,
     3.2, 2.7, 3, 2.5, 2.8, 3.2, 3, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8,
     3, 2.8, 3, 2.8, 3.8, 2.8, 2.8, 2.6, 3, 3.4, 3.1, 3, 3.1, 3.1, 3.1, 2.7, 3.2,
     3.3, 3, 2.5, 3, 3.4, 3), Petal.Length = c(1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4,
     1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1,
     1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3,
     1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4, 4.6,
     4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8,
     4, 4.9, 4.7, 4.3, 4.4, 4.8, 5, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4,
     4.1, 4, 4.4, 4.6, 4, 3.3, 4.2, 4.2, 4.2, 4.3, 3, 4.1, 6, 5.1, 5.9, 5.6, 5.8,
     6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5, 5.1, 5.3, 5.5, 6.7, 6.9, 5, 5.7, 4.9,
     6.7, 4.9, 5.7, 6, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5,
     4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5, 5.2, 5.4, 5.1), Petal.Width = c(0.2,
     0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4,
     0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1,
     0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2,
     0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1, 1.3, 1.4, 1, 1.5, 1, 1.4, 1.3, 1.4,
     1.5, 1, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1, 1.1, 1, 1.2,
     1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1, 1.3, 1.2, 1.3, 1.3, 1.1,
     1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2, 1.9, 2.1, 2, 2.4, 2.3,
     1.8, 2.2, 2.3, 1.5, 2.3, 2, 2, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2, 2.2,
     1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2, 2.3,
     1.8)), class = "data.frame", row.names = c(NA, 150L)))
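    
     Frame 17 above shows the wrapper forwarding the predict-time hyperparameter s = 0.01, together with the full 150-row iris data, to the wrapped classif.glmnet learner. A hedged sketch of the kind of setup that produces this call chain, assuming mlr's makePreprocWrapper() API and the iris-based multiclass.task object from mlr's test helpers (an illustration only, not a verified reproduction of #958):
    
         library(mlr)
         lrn = makeLearner("classif.glmnet", s = 0.01)  # s is used at predict time
         lrn = makePreprocWrapper(lrn,
           train = function(data, target, args) {
             # identity preprocessing; 'control' carries state to predict time
             list(data = data, control = list())
           },
           predict = function(data, target, args, control) data)
         mod = train(lrn, multiclass.task)
         p = predict(mod, multiclass.task)              # frame 1 of this traceback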
     18: do.call(predictLearner, c(list(.learner = .learner$next.learner, .model = .model$learner.model$next.model,
     .newdata = .newdata), args))
     19: (function (.learner, .model, .newdata, ...)
     {
     lmod = getLearnerModel(.model)
     if (inherits(lmod, "NoFeaturesModel")) {
     predictNofeatures(.model, .newdata)
     }
     else {
     assertDataFrame(.newdata, min.rows = 1L, min.cols = 1L)
     UseMethod("predictLearner")
     }
     })(.learner = structure(list(id = "classif.glmnet", type = "classif", package = "glmnet",
     properties = c("numerics", "factors", "prob", "twoclass", "multiclass", "weights"
     ), par.set = structure(list(pars = list(alpha = structure(list(id = "alpha",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0 ≤ α ≤ 1. The penalty is defined as (1-α)/2 ||β||_2^2 + α ||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user-supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it's often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Default value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in or out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .model = structure(list(learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "predict"), class = c("LearnerParam", "Param"
     )), nlambda = structure(list(id = "nlambda", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), intercept = structure(list(
     id = "intercept", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), thresh = structure(list(id = "thresh", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lower.limits = structure(list(
     id = "lower.limits", type = "numericvector", len = NA_integer_, lower = -Inf,
     upper = 0, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), upper.limits = structure(list(
     id = "upper.limits", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), maxit = structure(list(
     id = "maxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100000L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"),
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.multinomial = structure(list(
     id = "type.multinomial", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), devmax = structure(list(id = "devmax", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), big = structure(list(
     id = "big", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 9.9e+35,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), exmx = structure(list(id = "exmx", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 250, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), prec = structure(list(
     id = "prec", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-10,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param"))), forbidden = NULL), class = c("LearnerParamSet",
     "ParamSet")), par.vals = list(s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0 ≤ α ≤ 1. The penalty is defined as (1-α)/2 ||β||_2^2 + α ||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user-supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it's often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Default value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in or out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), learner.model = structure(list(a0 = structure(c(0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L, 5L), .Dimnames = list(c("setosa",
     "versicolor", "virginica"), c("s0", "s1", "s2", "s3", "s4"))), beta = list(setosa = new("dgCMatrix",
     i = c(0L, 0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4"
     )), x = c(0, 0, 0, 0, 0), factors = list()), versicolor = new("dgCMatrix", i = c(0L,
     0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width",
     "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0,
     0, 0), factors = list()), virginica = new("dgCMatrix", i = c(0L, 0L, 0L, 0L, 0L),
     p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width", "Petal.Length",
     "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0, 0, 0), factors = list())),
     dfmat = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L,
     5L), .Dimnames = list(c("setosa", "versicolor", "virginica"), c("s0", "s1", "s2",
     "s3", "s4"))), df = function (x, df1, df2, ncp, log = FALSE)
     {
     if (missing(ncp))
     .Call(C_df, x, df1, df2, log)
     else .Call(C_dnf, x, df1, df2, ncp, log)
     }, dim = 4:5, lambda = c(NaN, NaN, NaN, NaN, NaN), dev.ratio = c(-1.47835125379153e-15,
     -1.47835125379153e-15, -1.47835125379153e-15, -1.47835125379153e-15, -1.47835125379153e-15
     ), nulldev = 329.583686600433, npasses = 18L, jerr = 0L, offset = FALSE, classnames = c("setosa",
     "versicolor", "virginica"), grouped = FALSE, call = (function (x, y, family = c("gaussian",
     "binomial", "poisson", "multinomial", "cox", "mgaussian"), weights, offset = NULL,
     alpha = 1, nlambda = 100, lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04),
     lambda = NULL, standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars +
     1, pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1,
     nvars), lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05, type.gaussian = ifelse(nvars <
     500, "covariance", "naive"), type.logistic = c("Newton", "modified.Newton"),
     standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
     {
     family = match.arg(family)
     if (alpha > 1) {
     warning("alpha >1; set to 1")
     alpha = 1
     }
     if (alpha < 0) {
     warning("alpha<0; set to 0")
     alpha = 0
     }
     alpha = as.double(alpha)
     this.call = match.call()
     nlam = as.integer(nlambda)
     y = drop(y)
     np = dim(x)
     if (is.null(np) | (np[2] <= 1))
     stop("x should be a matrix with 2 or more columns")
     nobs = as.integer(np[1])
     if (missing(weights))
     weights = rep(1, nobs)
     else if (length(weights) != nobs)
     stop(paste("number of elements in weights (", length(weights), ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     nvars = as.integer(np[2])
     dimy = dim(y)
     nrowy = ifelse(is.null(dimy), length(y), dimy[1])
     if (nrowy != nobs)
     stop(paste("number of observations in y (", nrowy, ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     vnames = colnames(x)
     if (is.null(vnames))
     vnames = paste("V", seq(nvars), sep = "")
     ne = as.integer(dfmax)
     nx = as.integer(pmax)
     if (missing(exclude))
     exclude = integer(0)
     if (any(penalty.factor == Inf)) {
     exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
     exclude = sort(unique(exclude))
     }
     if (length(exclude) > 0) {
     jd = match(exclude, seq(nvars), 0)
     if (!all(jd > 0))
     stop("Some excluded variables out of range")
     penalty.factor[jd] = 1
     jd = as.integer(c(length(jd), jd))
     }
     else jd = as.integer(0)
     vp = as.double(penalty.factor)
     internal.parms = glmnet.control()
     if (any(lower.limits > 0)) {
     stop("Lower limits should be non-positive")
     }
     if (any(upper.limits < 0)) {
     stop("Upper limits should be non-negative")
     }
     lower.limits[lower.limits == -Inf] = -internal.parms$big
     upper.limits[upper.limits == Inf] = internal.parms$big
     if (length(lower.limits) < nvars) {
     if (length(lower.limits) == 1)
     lower.limits = rep(lower.limits, nvars)
     else stop("Require length 1 or nvars lower.limits")
     }
     else lower.limits = lower.limits[seq(nvars)]
     if (length(upper.limits) < nvars) {
     if (length(upper.limits) == 1)
     upper.limits = rep(upper.limits, nvars)
     else stop("Require length 1 or nvars upper.limits")
     }
     else upper.limits = upper.limits[seq(nvars)]
     cl = rbind(lower.limits, upper.limits)
     if (any(cl == 0)) {
     fdev = glmnet.control()$fdev
     if (fdev != 0) {
     glmnet.control(fdev = 0)
     on.exit(glmnet.control(fdev = fdev))
     }
     }
     storage.mode(cl) = "double"
     isd = as.integer(standardize)
     intr = as.integer(intercept)
     if (!missing(intercept) && family == "cox")
     warning("Cox model has no intercept")
     jsd = as.integer(standardize.response)
     thresh = as.double(thresh)
     if (is.null(lambda)) {
     if (lambda.min.ratio >= 1)
     stop("lambda.min.ratio should be less than 1")
     flmin = as.double(lambda.min.ratio)
     ulam = double(1)
     }
     else {
     flmin = as.double(1)
     if (any(lambda < 0))
     stop("lambdas should be non-negative")
     ulam = as.double(rev(sort(lambda)))
     nlam = as.integer(length(lambda))
     }
     is.sparse = FALSE
     ix = jx = NULL
     if (inherits(x, "sparseMatrix")) {
     is.sparse = TRUE
     x = as(x, "CsparseMatrix")
     x = as(x, "dgCMatrix")
     ix = as.integer(x@p + 1)
     jx = as.integer(x@i + 1)
     x = as.double(x@x)
     }
     kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
     if (family == "multinomial") {
     type.multinomial = match.arg(type.multinomial)
     if (type.multinomial == "grouped")
     kopt = 2
     }
     kopt = as.integer(kopt)
     fit = switch(family, gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset,
     type.gaussian, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam,
     thresh, isd, intr, vnames, maxit), poisson = fishnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, intr, vnames, maxit), binomial = lognet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl,
     ne, nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family),
     multinomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
     nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
     intr, vnames, maxit, kopt, family), cox = coxnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, vnames, maxit), mgaussian = mrelnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp,
     cl, ne, nx, nlam, flmin, ulam, thresh, isd, jsd, intr, vnames, maxit))
     if (is.null(lambda))
     fit$lambda = fix.lam(fit$lambda)
     fit$call = this.call
     fit$nobs = nobs
     class(fit) = c(class(fit), "glmnet")
     fit
     })(x = structure(c(5.1, 4.9, 4.7, 4.6, 5, 5.4, 4.6, 5, 4.4, 4.9, 5.4, 4.8, 4.8,
     4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5, 5, 5.2, 5.2, 4.7,
     4.8, 5.4, 5.2, 5.5, 4.9, 5, 5.5, 4.9, 4.4, 5.1, 5, 4.5, 4.4, 5, 5.1, 4.8, 5.1,
     4.6, 5.3, 5, 7, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5, 5.9, 6, 6.1,
     5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6, 5.7,
     5.5, 5.5, 5.8, 6, 5.4, 6, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5, 5.6, 5.7, 5.7,
     6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8,
     5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4,
     7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7,
     6.7, 6.3, 6.5, 6.2, 5.9, 3.5, 3, 3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7,
     3.4, 3, 3, 4, 4.4, 3.9, 3.5, 3.8, 3.8, 3.4, 3.7, 3.6, 3.3, 3.4, 3, 3.4, 3.5,
     3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5, 3.6, 3, 3.4, 3.5, 2.3, 3.2, 3.5,
     3.8, 3, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7,
     2, 3, 2.2, 2.9, 2.9, 3.1, 3, 2.7, 2.2, 2.5, 3.2, 2.8, 2.5, 2.8, 2.9, 3, 2.8,
     3, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3, 3.4, 3.1, 2.3, 3, 2.5, 2.6, 3, 2.6, 2.3,
     2.7, 3, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3, 2.9, 3, 3, 2.5, 2.9, 2.5, 3.6, 3.2,
     2.7, 3, 2.5, 2.8, 3.2, 3, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3,
     2.8, 3, 2.8, 3.8, 2.8, 2.8, 2.6, 3, 3.4, 3.1, 3, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3,
     3, 2.5, 3, 3.4, 3, 1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6,
     1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1, 1.7, 1.9, 1.6, 1.6, 1.5,
     1.4, 1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6,
     1.9, 1.4, 1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9,
     3.5, 4.2, 4, 4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4, 4.9, 4.7, 4.3, 4.4, 4.8,
     5, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4, 4.4, 4.6, 4, 3.3,
     4.2, 4.2, 4.2, 4.3, 3, 4.1, 6, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1,
     5.3, 5.5, 5, 5.1, 5.3, 5.5, 6.7, 6.9, 5, 5.7, 4.9, 6.7, 4.9, 5.7, 6, 4.8, 4.9,
     5.6, 5.8, 6.1, 6.4, 5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9,
     5.7, 5.2, 5, 5.2, 5.4, 5.1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1,
     0.2, 0.2, 0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2,
     0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3,
     0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1,
     1.3, 1.4, 1, 1.5, 1, 1.4, 1.3, 1.4, 1.5, 1, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3,
     1.4, 1.4, 1.7, 1.5, 1, 1.1, 1, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4,
     1.2, 1, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2, 2.1, 1.7, 1.8,
     1.8, 2.5, 2, 1.9, 2.1, 2, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3, 2, 2, 1.8, 2.1,
     1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2, 2.2, 1.5, 1.4, 2.3, 2.4, 1.8, 1.8, 2.1, 2.4,
     2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2, 2.3, 1.8), .Dim = c(150L, 4L), .Dimnames = list(
     NULL, c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"))),
     y = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
     1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
     1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
     2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
     2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
     2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
     3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
     3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
     3L, 3L), .Label = c("setosa", "versicolor", "virginica"), class = "factor"),
     family = "multinomial"), nobs = 150L), class = c("multnet", "glmnet"), mlr.train.info = structure(list(
     factors = structure(list(), .Names = character(0)), ordered = structure(list(), .Names = character(0)),
     restore.levels = FALSE, factors.to.dummies = FALSE, ordered.to.int = FALSE), class = "FixDataInfo")),
     task.desc = structure(list(id = "multiclass", type = "classif", target = "Species",
     size = 150L, n.feat = c(numerics = 4L, factors = 0L, ordered = 0L, functionals = 0L
     ), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE, has.coordinates = FALSE,
     class.levels = c("setosa", "versicolor", "virginica"), positive = NA_character_,
     negative = NA_character_, class.distribution = structure(c(setosa = 50L,
     versicolor = 50L, virginica = 50L), .Dim = 3L, .Dimnames = structure(list(
     c("setosa", "versicolor", "virginica")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc")), subset = 1:150, features = c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width"), factor.levels = list(Species = c("setosa",
     "versicolor", "virginica")), time = 0.0169999999999959, dump = NULL), class = "WrappedModel"),
     .newdata = structure(list(Sepal.Length = c(5.1, 4.9, 4.7, 4.6, 5, 5.4, 4.6, 5,
     4.4, 4.9, 5.4, 4.8, 4.8, 4.3, 5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1,
     4.8, 5, 5, 5.2, 5.2, 4.7, 4.8, 5.4, 5.2, 5.5, 4.9, 5, 5.5, 4.9, 4.4, 5.1, 5,
     4.5, 4.4, 5, 5.1, 4.8, 5.1, 4.6, 5.3, 5, 7, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9,
     6.6, 5.2, 5, 5.9, 6, 6.1, 5.6, 6.7, 5.6, 5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4,
     6.6, 6.8, 6.7, 6, 5.7, 5.5, 5.5, 5.8, 6, 5.4, 6, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1,
     5.8, 5, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3, 5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3,
     6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7, 7.7, 6, 6.9, 5.6, 7.7, 6.3,
     6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1, 7.7, 6.3, 6.4, 6, 6.9,
     6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9), Sepal.Width = c(3.5, 3, 3.2,
     3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3, 3, 4, 4.4, 3.9, 3.5, 3.8, 3.8,
     3.4, 3.7, 3.6, 3.3, 3.4, 3, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2,
     3.5, 3.6, 3, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1,
     2.3, 2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2, 3, 2.2, 2.9, 2.9, 3.1, 3, 2.7, 2.2, 2.5,
     3.2, 2.8, 2.5, 2.8, 2.9, 3, 2.8, 3, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3, 3.4, 3.1,
     2.3, 3, 2.5, 2.6, 3, 2.6, 2.3, 2.7, 3, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3, 2.9,
     3, 3, 2.5, 2.9, 2.5, 3.6, 3.2, 2.7, 3, 2.5, 2.8, 3.2, 3, 3.8, 2.6, 2.2, 3.2,
     2.8, 2.8, 2.7, 3.3, 3.2, 2.8, 3, 2.8, 3, 2.8, 3.8, 2.8, 2.8, 2.6, 3, 3.4, 3.1,
     3, 3.1, 3.1, 3.1, 2.7, 3.2, 3.3, 3, 2.5, 3, 3.4, 3), Petal.Length = c(1.4, 1.4,
     1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6, 1.4, 1.1, 1.2, 1.5, 1.3, 1.4,
     1.7, 1.5, 1.7, 1.5, 1, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4, 1.6, 1.6, 1.5, 1.5, 1.4,
     1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4, 1.6, 1.4, 1.5, 1.4,
     4.7, 4.5, 4.9, 4, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4, 4.7, 3.6, 4.4, 4.5,
     4.1, 4.5, 3.9, 4.8, 4, 4.9, 4.7, 4.3, 4.4, 4.8, 5, 4.5, 3.5, 3.8, 3.7, 3.9, 5.1,
     4.5, 4.5, 4.7, 4.4, 4.1, 4, 4.4, 4.6, 4, 3.3, 4.2, 4.2, 4.2, 4.3, 3, 4.1, 6,
     5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5, 5.1, 5.3, 5.5,
     6.7, 6.9, 5, 5.7, 4.9, 6.7, 4.9, 5.7, 6, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4, 5.6, 5.1,
     5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5, 5.2, 5.4, 5.1),
     Petal.Width = c(0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2,
     0.1, 0.1, 0.2, 0.4, 0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4,
     0.2, 0.2, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3,
     0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2, 1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6,
     1, 1.3, 1.4, 1, 1.5, 1, 1.4, 1.3, 1.4, 1.5, 1, 1.5, 1.1, 1.8, 1.3, 1.5, 1.2,
     1.3, 1.4, 1.4, 1.7, 1.5, 1, 1.1, 1, 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3, 1.3,
     1.2, 1.4, 1.2, 1, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9, 2.1, 1.8, 2.2,
     2.1, 1.7, 1.8, 1.8, 2.5, 2, 1.9, 2.1, 2, 2.4, 2.3, 1.8, 2.2, 2.3, 1.5, 2.3,
     2, 2, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2, 2.2, 1.5, 1.4, 2.3, 2.4,
     1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2, 2.3, 1.8)), class = "data.frame", row.names = c(NA,
     150L)), `NA` = NULL, s = 0.01)
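    
     The proximate cause of error 3 is visible in the learner.model dump in frame 19 above: the fitted path has lambda = c(NaN, NaN, NaN, NaN, NaN) (with all-zero a0 and beta), and predicting at s = 0.01 with exact = FALSE requires interpolating coefficients against that lambda grid. The message reported at the top ("need at least two non-NA values to interpolate") is the one stats::approx() raises when fewer than two non-NA grid points remain; glmnet's coefficient-path interpolation relies on approx(), so an all-NaN lambda sequence plausibly triggers it. A minimal sketch of just the failing interpolation (not the actual glmnet internals):
    
         # Interpolating at s = 0.01 against an all-NaN lambda grid raises
         # the same message reported for error 3.
         lambda = c(NaN, NaN, NaN, NaN, NaN)  # learner.model$lambda from the dump
         stats::approx(x = lambda, y = seq_along(lambda), xout = 0.01)
         # Error: need at least two non-NA values to interpolate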
     20: predictLearner.classif.glmnet(.learner = structure(list(id = "classif.glmnet", type = "classif",
     package = "glmnet", properties = c("numerics", "factors", "prob", "twoclass",
     "multiclass", "weights"), par.set = structure(list(pars = list(alpha = structure(list(
     id = "alpha", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .model = structure(list(learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "predict"), class = c("LearnerParam", "Param"
     )), nlambda = structure(list(id = "nlambda", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), intercept = structure(list(
     id = "intercept", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), thresh = structure(list(id = "thresh", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lower.limits = structure(list(
     id = "lower.limits", type = "numericvector", len = NA_integer_, lower = -Inf,
     upper = 0, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), upper.limits = structure(list(
     id = "upper.limits", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), maxit = structure(list(
     id = "maxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100000L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"),
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.multinomial = structure(list(
     id = "type.multinomial", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), devmax = structure(list(id = "devmax", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), big = structure(list(
     id = "big", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 9.9e+35,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), exmx = structure(list(id = "exmx", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 250, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), prec = structure(list(
     id = "prec", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-10,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param"))), forbidden = NULL), class = c("LearnerParamSet",
     "ParamSet")), par.vals = list(s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), learner.model = structure(list(a0 = structure(c(0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L, 5L), .Dimnames = list(c("setosa",
     "versicolor", "virginica"), c("s0", "s1", "s2", "s3", "s4"))), beta = list(setosa = new("dgCMatrix",
     i = c(0L, 0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4"
     )), x = c(0, 0, 0, 0, 0), factors = list()), versicolor = new("dgCMatrix", i = c(0L,
     0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width",
     "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0,
     0, 0), factors = list()), virginica = new("dgCMatrix", i = c(0L, 0L, 0L, 0L, 0L),
     p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width", "Petal.Length",
     "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0, 0, 0), factors = list())),
     dfmat = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L,
     5L), .Dimnames = list(c("setosa", "versicolor", "virginica"), c("s0", "s1", "s2",
     "s3", "s4"))), df = function (x, df1, df2, ncp, log = FALSE)
     {
     if (missing(ncp))
     .Call(C_df, x, df1, df2, log)
     else .Call(C_dnf, x, df1, df2, ncp, log)
     }, dim = 4:5, lambda = c(NaN, NaN, NaN, NaN, NaN), dev.ratio = c(-1.47835125379153e-15,
     -1.47835125379153e-15, -1.47835125379153e-15, -1.47835125379153e-15, -1.47835125379153e-15
     ), nulldev = 329.583686600433, npasses = 18L, jerr = 0L, offset = FALSE, classnames = c("setosa",
     "versicolor", "virginica"), grouped = FALSE, call = (function (x, y, family = c("gaussian",
     "binomial", "poisson", "multinomial", "cox", "mgaussian"), weights, offset = NULL,
     alpha = 1, nlambda = 100, lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04),
     lambda = NULL, standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars +
     1, pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1,
     nvars), lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05, type.gaussian = ifelse(nvars <
     500, "covariance", "naive"), type.logistic = c("Newton", "modified.Newton"),
     standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
     {
     family = match.arg(family)
     if (alpha > 1) {
     warning("alpha >1; set to 1")
     alpha = 1
     }
     if (alpha < 0) {
     warning("alpha<0; set to 0")
     alpha = 0
     }
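     # alpha outside [0, 1] is clamped to the nearest bound with a warning,
     # not an error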
     alpha = as.double(alpha)
     this.call = match.call()
     nlam = as.integer(nlambda)
     y = drop(y)
     np = dim(x)
     if (is.null(np) | (np[2] <= 1))
     stop("x should be a matrix with 2 or more columns")
     nobs = as.integer(np[1])
     if (missing(weights))
     weights = rep(1, nobs)
     else if (length(weights) != nobs)
     stop(paste("number of elements in weights (", length(weights), ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     nvars = as.integer(np[2])
     dimy = dim(y)
     nrowy = ifelse(is.null(dimy), length(y), dimy[1])
     if (nrowy != nobs)
     stop(paste("number of observations in y (", nrowy, ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     vnames = colnames(x)
     if (is.null(vnames))
     vnames = paste("V", seq(nvars), sep = "")
     ne = as.integer(dfmax)
     nx = as.integer(pmax)
     if (missing(exclude))
     exclude = integer(0)
     if (any(penalty.factor == Inf)) {
     exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
     exclude = sort(unique(exclude))
     }
     if (length(exclude) > 0) {
     jd = match(exclude, seq(nvars), 0)
     if (!all(jd > 0))
     stop("Some excluded variables out of range")
     penalty.factor[jd] = 1
     jd = as.integer(c(length(jd), jd))
     }
     else jd = as.integer(0)
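     # jd is the exclusion index vector handed to the fitting core, prefixed
     # with its own length; a bare 0 means "no exclusions"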
     vp = as.double(penalty.factor)
     internal.parms = glmnet.control()
     if (any(lower.limits > 0)) {
     stop("Lower limits should be non-positive")
     }
     if (any(upper.limits < 0)) {
     stop("Upper limits should be non-negative")
     }
     lower.limits[lower.limits == -Inf] = -internal.parms$big
     upper.limits[upper.limits == Inf] = internal.parms$big
     if (length(lower.limits) < nvars) {
     if (length(lower.limits) == 1)
     lower.limits = rep(lower.limits, nvars)
     else stop("Require length 1 or nvars lower.limits")
     }
     else lower.limits = lower.limits[seq(nvars)]
     if (length(upper.limits) < nvars) {
     if (length(upper.limits) == 1)
     upper.limits = rep(upper.limits, nvars)
     else stop("Require length 1 or nvars upper.limits")
     }
     else upper.limits = upper.limits[seq(nvars)]
     cl = rbind(lower.limits, upper.limits)
     if (any(cl == 0)) {
     fdev = glmnet.control()$fdev
     if (fdev != 0) {
     glmnet.control(fdev = 0)
     on.exit(glmnet.control(fdev = fdev))
     }
     }
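     # glmnet keeps these settings in a global control object (glmnet.control());
     # when any coefficient bound is exactly zero, fdev is temporarily forced to 0
     # and restored on exit -- the same global state the learner note above says
     # mlr resets around training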
     storage.mode(cl) = "double"
     isd = as.integer(standardize)
     intr = as.integer(intercept)
     if (!missing(intercept) && family == "cox")
     warning("Cox model has no intercept")
     jsd = as.integer(standardize.response)
     thresh = as.double(thresh)
     if (is.null(lambda)) {
     if (lambda.min.ratio >= 1)
     stop("lambda.min.ratio should be less than 1")
     flmin = as.double(lambda.min.ratio)
     ulam = double(1)
     }
     else {
     flmin = as.double(1)
     if (any(lambda < 0))
     stop("lambdas should be non-negative")
     ulam = as.double(rev(sort(lambda)))
     nlam = as.integer(length(lambda))
     }
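     # a user-supplied lambda path must be non-negative and is passed in
     # decreasing order as ulam; flmin = 1 marks the sequence as user-supplied
     # so no automatic path is generated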
     is.sparse = FALSE
     ix = jx = NULL
     if (inherits(x, "sparseMatrix")) {
     is.sparse = TRUE
     x = as(x, "CsparseMatrix")
     x = as(x, "dgCMatrix")
     ix = as.integer(x@p + 1)
     jx = as.integer(x@i + 1)
     x = as.double(x@x)
     }
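     # sparse x is unpacked into its 1-based CSC components: column pointers
     # (x@p + 1), row indices (x@i + 1), and the nonzero values x@x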
     kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
     if (family == "multinomial") {
     type.multinomial = match.arg(type.multinomial)
     if (type.multinomial == "grouped")
     kopt = 2
     }
     kopt = as.integer(kopt)
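     # kopt encodes the optimizer: 0 = Newton, 1 = modified Newton,
     # 2 = grouped multinomial lasso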
     fit = switch(family, gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset,
     type.gaussian, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam,
     thresh, isd, intr, vnames, maxit), poisson = fishnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, intr, vnames, maxit), binomial = lognet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl,
     ne, nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family),
     multinomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
     nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
     intr, vnames, maxit, kopt, family), cox = coxnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, vnames, maxit), mgaussian = mrelnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp,
     cl, ne, nx, nlam, flmin, ulam, thresh, isd, jsd, intr, vnames, maxit))
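     # binomial and multinomial both dispatch to lognet(), distinguished only
     # by kopt and the family argument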
     if (is.null(lambda))
     fit$lambda = fix.lam(fit$lambda)
     fit$call = this.call
     fit$nobs = nobs
     class(fit) = c(class(fit), "glmnet")
     fit
     })(x = <150 x 4 numeric matrix of the iris measurements (columns Sepal.Length,
     Sepal.Width, Petal.Length, Petal.Width), same values as the data above>, y = <factor
     of the 150 iris Species labels>,
     family = "multinomial"), nobs = 150L), class = c("multnet", "glmnet"), mlr.train.info = structure(list(
     factors = structure(list(), .Names = character(0)), ordered = structure(list(), .Names = character(0)),
     restore.levels = FALSE, factors.to.dummies = FALSE, ordered.to.int = FALSE), class = "FixDataInfo")),
     task.desc = structure(list(id = "multiclass", type = "classif", target = "Species",
     size = 150L, n.feat = c(numerics = 4L, factors = 0L, ordered = 0L, functionals = 0L
     ), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE, has.coordinates = FALSE,
     class.levels = c("setosa", "versicolor", "virginica"), positive = NA_character_,
     negative = NA_character_, class.distribution = structure(c(setosa = 50L,
     versicolor = 50L, virginica = 50L), .Dim = 3L, .Dimnames = structure(list(
     c("setosa", "versicolor", "virginica")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc")), subset = 1:150, features = c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width"), factor.levels = list(Species = c("setosa",
     "versicolor", "virginica")), time = 0.0169999999999959, dump = NULL), class = "WrappedModel"),
     .newdata = <the same 150-row iris data.frame as in the frames above>, `NA` = NULL, s = 0.01)
     21: predict(.model$learner.model, newx = .newdata, type = "class", ...)
     22: predict.multnet(.model$learner.model, newx = .newdata, type = "class", ...)
     23: lambda.interp(lambda, s)
     24: approx(lambda, seq(lambda), sfrac)
     25: stop("need at least two non-NA values to interpolate")
    
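     The interpolation failure in frames 23-25 can be reproduced in isolation: the
     dumped learner.model stores lambda = c(NaN, NaN, NaN, NaN, NaN), and
     predict.multnet() interpolates the requested s = 0.01 against that sequence via
     lambda.interp(), which ends in the stats::approx() call shown in frame 24. A
     minimal sketch of that final step (lambda values taken from the dump above;
     0.5 stands in for the sfrac value computed inside lambda.interp()):
    
     lambda <- c(NaN, NaN, NaN, NaN, NaN)  # fit$lambda as stored in the dumped model
     approx(lambda, seq(lambda), 0.5)      # frame 24: approx(lambda, seq(lambda), sfrac)
     # Error in approx(lambda, seq(lambda), 0.5) :
     #   need at least two non-NA values to interpolate
    
     approx() raises this error whenever fewer than two of its x values are non-NA,
     so any glmnet fit whose lambda path degenerates to NaN fails at prediction time
     regardless of the s value that mlr passes.
    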
     ── 4. Error: TuneWrapper passed predict hyper pars correctly to base learner (@t
     need at least two non-NA values to interpolate
     1: resample(tw, binaryclass.task, rdesc) at testthat/test_base_TuneWrapper.R:51
     2: parallelMap(doResampleIteration, seq_len(rin$desc$iters), level = "mlr.resample",
     more.args = more.args)
     3: mapply(fun2, ..., MoreArgs = more.args, SIMPLIFY = FALSE, USE.NAMES = FALSE)
     4: (function (learner, task, rin, i, measures, weights, model, extract, show.info)
     {
     setSlaveOptions()
     train.i = rin$train.inds[[i]]
     test.i = rin$test.inds[[i]]
     calculateResampleIterationResult(learner = learner, task = task, i = i, train.i = train.i,
     test.i = test.i, measures = measures, weights = weights, rdesc = rin$desc,
     model = model, extract = extract, show.info = show.info)
     })(dots[[1L]][[1L]], learner = structure(list(id = "classif.glmnet.tuned", type = "classif",
     package = "glmnet", properties = NULL, par.set = structure(list(pars = structure(list(), .Names = character(0)),
     forbidden = NULL), class = "ParamSet"), par.vals = structure(list(), .Names = character(0)),
     predict.type = "prob", fix.factors.prediction = FALSE, next.learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "predict"), class = c("LearnerParam",
     "Param")), nlambda = structure(list(id = "nlambda", type = "integer",
     len = 1L, lower = 1L, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lambda = structure(list(
     id = "lambda", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), standardize = structure(list(
     id = "standardize", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical",
     len = 1L, lower = NULL, upper = NULL, values = list(`TRUE` = TRUE,
     `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), thresh = structure(list(
     id = "thresh", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100000L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.logistic = structure(list(id = "type.logistic", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(Newton = "Newton",
     modified.Newton = "modified.Newton"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial",
     type = "discrete", len = 1L, lower = NULL, upper = NULL, values = list(
     ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 0.999, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), eps = structure(list(id = "eps", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-06, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 9.9e+35, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mnlam = structure(list(id = "mnlam", type = "integer", len = 1L,
     lower = 1, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 5, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exmx = structure(list(id = "exmx", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 250, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-10, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mxit = structure(list(id = "mxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet"
     )), par.vals = list(s = 0.01), predict.type = "prob", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(
     s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), model.subclass = "TuneModel", resampling = structure(list(
     split = 0.3, id = "holdout", iters = 1L, predict = "test", stratify = FALSE), class = c("HoldoutDesc",
     "ResampleDesc")), measures = list(structure(list(id = "mmce", minimize = TRUE,
     properties = c("classif", "classif.multi", "req.pred", "req.truth"), fun = function (task,
     model, pred, feats, extra.args)
     {
     measureMMCE(pred$data$truth, pred$data$response)
     }, extra.args = list(), best = 0, worst = 1, name = "Mean misclassification error",
     note = "Defined as: mean(response != truth)", aggr = structure(list(id = "test.mean",
     name = "Test mean", fun = function (task, perf.test, perf.train, measure,
     group, pred)
     mean(perf.test), properties = "req.test"), class = "Aggregation")), class = "Measure")),
     opt.pars = structure(list(pars = list(s = structure(list(id = "s", type = "numeric",
     len = 1L, lower = 0.001, upper = 0.1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list()), class = "Param")), forbidden = NULL), class = "ParamSet"),
     bit.names = character(0), bits.to.features = function ()
     {
     }, control = structure(list(same.resampling.instance = TRUE, impute.val = NULL,
     tune.threshold = FALSE, tune.threshold.args = list(), log.fun = function (learner,
     task, resampling, measures, par.set, control, opt.path, dob, x, y, remove.nas,
     stage, prev.stage)
     {
         x.string = paramValueToString(par.set, x, show.missing.values = !remove.nas)
         if (inherits(learner, "ModelMultiplexer"))
             x.string = stri_replace_all(x.string, "",
                 regex = stri_paste(x$selected.learner, "\\."))
         logFunDefault(learner, task, resampling, measures, par.set, control,
             opt.path, dob, x.string, y, remove.nas, stage, prev.stage,
             prefixes = c("Tune-x", "Tune-y"))
     }, final.dw.perc = NULL, extra.args = list(maxit = 1L), budget = 1L), class = c("TuneControlRandom",
     "TuneControl", "OptControl")), show.info = FALSE), class = c("TuneWrapper", "OptWrapper",
     "BaseWrapper", "Learner")), task = structure(list(type = "classif", env = <environment>,
     weights = NULL, blocking = NULL, coordinates = NULL, task.desc = structure(list(
     id = "binary", type = "classif", target = "Class", size = 208L, n.feat = c(numerics = 60L,
     factors = 0L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("M", "R"),
     positive = "M", negative = "R", class.distribution = structure(c(M = 111L,
     R = 97L), .Dim = 2L, .Dimnames = structure(list(c("M", "R")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), rin = structure(list(desc = structure(list(split = 0.3, id = "holdout",
     iters = 1L, predict = "test", stratify = FALSE), class = c("HoldoutDesc", "ResampleDesc"
     )), size = 208L, train.inds = list(c(60L, 164L, 85L, 182L, 192L, 10L, 107L, 180L,
     111L, 91L, 190L, 90L, 133L, 112L, 20L, 174L, 48L, 9L, 63L, 181L, 168L, 130L, 120L,
     184L, 121L, 187L, 100L, 108L, 53L, 27L, 172L, 160L, 122L, 140L, 5L, 83L, 131L, 38L,
     55L, 40L, 24L, 70L, 69L, 61L, 26L, 23L, 171L, 76L, 43L, 137L, 8L, 167L, 125L, 19L,
     87L, 32L, 194L, 114L, 135L, 56L, 99L, 14L)), test.inds = list(c(80L, 57L, 92L, 166L,
     165L, 161L, 89L, 151L, 126L, 141L, 1L, 94L, 74L, 119L, 68L, 22L, 47L, 127L, 79L,
     148L, 81L, 162L, 117L, 62L, 179L, 207L, 33L, 17L, 185L, 88L, 102L, 176L, 147L, 150L,
     67L, 183L, 149L, 113L, 146L, 201L, 178L, 98L, 159L, 46L, 175L, 54L, 143L, 155L, 21L,
     124L, 93L, 101L, 71L, 105L, 58L, 41L, 2L, 202L, 29L, 30L, 158L, 139L, 152L, 170L,
     13L, 45L, 66L, 25L, 50L, 156L, 39L, 136L, 132L, 169L, 77L, 144L, 195L, 154L, 73L,
     65L, 59L, 35L, 204L, 129L, 28L, 163L, 189L, 52L, 199L, 103L, 75L, 191L, 104L, 157L,
     72L, 196L, 31L, 49L, 11L, 198L, 173L, 82L, 200L, 177L, 15L, 116L, 109L, 3L, 186L,
     208L, 206L, 95L, 78L, 203L, 110L, 18L, 134L, 106L, 86L, 142L, 118L, 123L, 145L, 7L,
     153L, 115L, 36L, 84L, 44L, 16L, 6L, 51L, 205L, 188L, 12L, 37L, 193L, 96L, 64L, 138L,
     34L, 4L, 97L, 128L, 197L, 42L)), group = structure(integer(0), .Label = character(0), class = "factor")), class = "ResampleInstance"),
     weights = NULL, measures = list(<mmce Measure object, identical to the one shown above>),
     model = FALSE, extract = function (model)
     {
     }, show.info = FALSE)
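
    The dump above is the TuneWrapper being resampled: a classif.glmnet base learner
    whose prediction-time penalty `s` is tuned over [0.001, 0.1] by random search with
    a budget of 1, using an inner holdout split of 0.3 and mmce. A minimal sketch that
    reconstructs this setup, assuming mlr's bundled `sonar.task` (whose description
    matches the "binary" task in the dump: 208 observations, 60 numeric features,
    classes M and R):

        library(mlr)
        # Base learner as in the dump: glmnet with probability predictions
        lrn = makeLearner("classif.glmnet", predict.type = "prob")
        # opt.pars above: tune `s` in [0.001, 0.1]
        ps = makeParamSet(makeNumericParam("s", lower = 0.001, upper = 0.1))
        ctrl = makeTuneControlRandom(maxit = 1L)          # budget = 1, as in the dump
        inner = makeResampleDesc("Holdout", split = 0.3)  # HoldoutDesc with split = 0.3
        tuned = makeTuneWrapper(lrn, resampling = inner, measures = mmce,
          par.set = ps, control = ctrl, show.info = FALSE)
        # Outer evaluation as in the failing test
        r = resample(tuned, sonar.task, makeResampleDesc("Holdout", split = 0.3),
          measures = mmce)
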
     5: calculateResampleIterationResult(learner = learner, task = task, i = i, train.i = train.i,
            test.i = test.i, measures = measures, weights = weights, rdesc = rin$desc,
            model = model, extract = extract, show.info = show.info)
     6: train(learner, task, subset = train.i, weights = weights[train.i])
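
    The learner note in the dump states that mlr resets all glmnet.control parameters
    to their defaults before setting the specified ones and after training, so custom
    control settings do not survive the train() call in frame 6. A sketch of the
    save-and-re-set pattern the note recommends, assuming that glmnet.control() called
    without arguments returns the current settings as a named list:

        old = glmnet::glmnet.control()           # save the current control settings
        glmnet::glmnet.control(fdev = 0)         # a custom setting for other work
        mod = train(lrn, sonar.task)             # mlr resets the control object here
        do.call(glmnet::glmnet.control, old)     # re-set the saved values afterwards
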
     7: measureTime(fun1({
            learner.model = fun2(fun3(do.call(trainLearner, pars)))
        }))
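
    Frames 7 to 10 wrap the actual fit: do.call(trainLearner, pars) dispatches to the
    learner's trainLearner method, which calls glmnet::glmnet(). Outside the test
    harness, the fitted glmnet object (the learner.model dumped further below) can be
    inspected through mlr's accessor, for example:

        mod = train(makeLearner("classif.glmnet"), sonar.task)
        fit = getLearnerModel(mod)   # the underlying "lognet"/"glmnet" fit
        fit$lambda                   # the lambda sequence used during training
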
     8: force(expr)
     9: fun1({
            learner.model = fun2(fun3(do.call(trainLearner, pars)))
        })
     10: fun2(fun3(do.call(trainLearner, pars)))
     ...
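
    Frame 41 below is mlr's internal prediction wrapper. Its first branch is what
    fix.factors.prediction does: when enabled, factor columns in the new data are
    re-levelled to the factor levels stored in the trained model before
    predictLearner() runs, guarding against level mismatches at prediction time.
    The learner in this dump has it disabled; it is an opt-in flag of makeLearner:

        # Sketch: enable factor-level fixing at prediction time
        lrn = makeLearner("classif.glmnet", predict.type = "prob",
          fix.factors.prediction = TRUE)
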
     41: (function (.learner, .model, .newdata, ...)
     {
         if (.learner$fix.factors.prediction) {
             fls = .model$factor.levels
             ns = names(fls)
             ns = intersect(colnames(.newdata), ns)
             fls = fls[ns]
             if (length(ns) > 0L)
                 .newdata[ns] = mapply(factor, x = .newdata[ns], levels = fls,
                     SIMPLIFY = FALSE)
         }
         p = predictLearner(.learner, .model, .newdata, ...)
         p = checkPredictLearnerOutput(.learner, .model, p)
         return(p)
     })(.learner = structure(list(id = "classif.glmnet", type = "classif", package = "glmnet",
     properties = c("numerics", "factors", "prob", "twoclass", "multiclass", "weights"
     ), par.set = <LearnerParamSet identical to the classif.glmnet parameter set shown above>,
     par.vals = list(s = 0.0251183277992532), predict.type = "prob",
     name = "GLM with Lasso or Elasticnet Regularization", short.name = "glmnet",
     note = <identical to the learner note shown above>,
     callees = c("glmnet", "glmnet.control", "predict.glmnet"),
     help.list = <identical to the help.list shown above>,
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .model = structure(list(learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "predict"), class = c("LearnerParam", "Param"
     )), nlambda = structure(list(id = "nlambda", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), intercept = structure(list(
     id = "intercept", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), thresh = structure(list(id = "thresh", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lower.limits = structure(list(
     id = "lower.limits", type = "numericvector", len = NA_integer_, lower = -Inf,
     upper = 0, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), upper.limits = structure(list(
     id = "upper.limits", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), maxit = structure(list(
     id = "maxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100000L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"),
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.multinomial = structure(list(
     id = "type.multinomial", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), devmax = structure(list(id = "devmax", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), big = structure(list(
     id = "big", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 9.9e+35,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), exmx = structure(list(id = "exmx", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 250, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), prec = structure(list(
     id = "prec", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-10,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param"))), forbidden = NULL), class = c("LearnerParamSet",
     "ParamSet")), par.vals = list(s = 0.0251183277992532), predict.type = "prob",
     name = "GLM with Lasso or Elasticnet Regularization", short.name = "glmnet",
     note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), learner.model = structure(list(a0 = c(s0 = 0.22314355131421,
     s1 = 0.22314355131421, s2 = 0.22314355131421, s3 = 0.22314355131421, s4 = 0.22314355131421
     ), beta = new("dgCMatrix", i = c(0L, 0L, 0L, 0L, 0L), p = 0:5, Dim = c(60L, 5L),
     Dimnames = list(c("V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10",
     "V11", "V12", "V13", "V14", "V15", "V16", "V17", "V18", "V19", "V20", "V21",
     "V22", "V23", "V24", "V25", "V26", "V27", "V28", "V29", "V30", "V31", "V32",
     "V33", "V34", "V35", "V36", "V37", "V38", "V39", "V40", "V41", "V42", "V43",
     "V44", "V45", "V46", "V47", "V48", "V49", "V50", "V51", "V52", "V53", "V54",
     "V55", "V56", "V57", "V58", "V59", "V60"), c("s0", "s1", "s2", "s3", "s4")),
     x = c(0, 0, 0, 0, 0), factors = list()), df = c(0, 0, 0, 0, 0), dim = c(60L,
     5L), lambda = c(NaN, NaN, NaN, NaN, NaN), dev.ratio = c(2.53231395323395e-16, 3.94643255846773e-16,
     3.94643255846773e-16, 3.94643255846773e-16, 3.94643255846773e-16), nulldev = 24.7306167575036,
     npasses = 6L, jerr = 0L, offset = FALSE, classnames = c("M", "R"), call = (function (x,
         y, family = c("gaussian", "binomial", "poisson", "multinomial", "cox", "mgaussian"),
         weights, offset = NULL, alpha = 1, nlambda = 100,
         lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04), lambda = NULL,
         standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars + 1,
         pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1, nvars),
         lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05,
         type.gaussian = ifelse(nvars < 500, "covariance", "naive"),
         type.logistic = c("Newton", "modified.Newton"), standardize.response = FALSE,
         type.multinomial = c("ungrouped", "grouped"))
     {
         family = match.arg(family)
         if (alpha > 1) {
             warning("alpha >1; set to 1")
             alpha = 1
         }
         if (alpha < 0) {
             warning("alpha<0; set to 0")
             alpha = 0
         }
         alpha = as.double(alpha)
         this.call = match.call()
         nlam = as.integer(nlambda)
         y = drop(y)
         np = dim(x)
         if (is.null(np) | (np[2] <= 1))
             stop("x should be a matrix with 2 or more columns")
         nobs = as.integer(np[1])
         if (missing(weights))
             weights = rep(1, nobs)
         else if (length(weights) != nobs)
             stop(paste("number of elements in weights (", length(weights),
                 ") not equal to the number of rows of x (", nobs, ")", sep = ""))
         nvars = as.integer(np[2])
         dimy = dim(y)
         nrowy = ifelse(is.null(dimy), length(y), dimy[1])
         if (nrowy != nobs)
             stop(paste("number of observations in y (", nrowy,
                 ") not equal to the number of rows of x (", nobs, ")", sep = ""))
         vnames = colnames(x)
         if (is.null(vnames))
             vnames = paste("V", seq(nvars), sep = "")
         ne = as.integer(dfmax)
         nx = as.integer(pmax)
         if (missing(exclude))
             exclude = integer(0)
         if (any(penalty.factor == Inf)) {
             exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
             exclude = sort(unique(exclude))
         }
         if (length(exclude) > 0) {
             jd = match(exclude, seq(nvars), 0)
             if (!all(jd > 0))
                 stop("Some excluded variables out of range")
             penalty.factor[jd] = 1
             jd = as.integer(c(length(jd), jd))
         }
         else jd = as.integer(0)
         vp = as.double(penalty.factor)
         internal.parms = glmnet.control()
         if (any(lower.limits > 0)) {
             stop("Lower limits should be non-positive")
         }
         if (any(upper.limits < 0)) {
             stop("Upper limits should be non-negative")
         }
         lower.limits[lower.limits == -Inf] = -internal.parms$big
         upper.limits[upper.limits == Inf] = internal.parms$big
         if (length(lower.limits) < nvars) {
             if (length(lower.limits) == 1)
                 lower.limits = rep(lower.limits, nvars)
             else stop("Require length 1 or nvars lower.limits")
         }
         else lower.limits = lower.limits[seq(nvars)]
         if (length(upper.limits) < nvars) {
             if (length(upper.limits) == 1)
                 upper.limits = rep(upper.limits, nvars)
             else stop("Require length 1 or nvars upper.limits")
         }
         else upper.limits = upper.limits[seq(nvars)]
         cl = rbind(lower.limits, upper.limits)
         if (any(cl == 0)) {
             fdev = glmnet.control()$fdev
             if (fdev != 0) {
                 glmnet.control(fdev = 0)
                 on.exit(glmnet.control(fdev = fdev))
             }
         }
         storage.mode(cl) = "double"
         isd = as.integer(standardize)
         intr = as.integer(intercept)
         if (!missing(intercept) && family == "cox")
             warning("Cox model has no intercept")
         jsd = as.integer(standardize.response)
         thresh = as.double(thresh)
         if (is.null(lambda)) {
             if (lambda.min.ratio >= 1)
                 stop("lambda.min.ratio should be less than 1")
             flmin = as.double(lambda.min.ratio)
             ulam = double(1)
         }
         else {
             flmin = as.double(1)
             if (any(lambda < 0))
                 stop("lambdas should be non-negative")
             ulam = as.double(rev(sort(lambda)))
             nlam = as.integer(length(lambda))
         }
         is.sparse = FALSE
         ix = jx = NULL
         if (inherits(x, "sparseMatrix")) {
             is.sparse = TRUE
             x = as(x, "CsparseMatrix")
             x = as(x, "dgCMatrix")
             ix = as.integer(x@p + 1)
             jx = as.integer(x@i + 1)
             x = as.double(x@x)
         }
         kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
         if (family == "multinomial") {
             type.multinomial = match.arg(type.multinomial)
             if (type.multinomial == "grouped")
                 kopt = 2
         }
         kopt = as.integer(kopt)
         fit = switch(family,
             gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset, type.gaussian,
                 alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh,
                 isd, intr, vnames, maxit),
             poisson = fishnet(x, is.sparse, ix, jx, y, weights, offset, alpha,
                 nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
                 intr, vnames, maxit),
             binomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
                 nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
                 intr, vnames, maxit, kopt, family),
             multinomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
                 nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
                 intr, vnames, maxit, kopt, family),
             cox = coxnet(x, is.sparse, ix, jx, y, weights, offset, alpha, nobs,
                 nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd, vnames,
                 maxit),
             mgaussian = mrelnet(x, is.sparse, ix, jx, y, weights, offset, alpha,
                 nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
                 jsd, intr, vnames, maxit))
         if (is.null(lambda))
             fit$lambda = fix.lam(fit$lambda)
         fit$call = this.call
         fit$nobs = nobs
         class(fit) = c(class(fit), "glmnet")
         fit
     })(x = structure(c(0.0223, 0.0211, 0.0346, 0.0388, 0.0188, 0.0378, 0.0151, 0.1088,
     0.0084, 0.0201, 0.0283, 0.0096, 0.0164, 0.0087, 0.0315, 0.0195, 0.0235, 0.0072,
     0.0375, 0.0319, 0.0509, 0.0324, 0.037, 0.0318, 0.032, 0.1278, 0.0153, 0.0116,
     0.0599, 0.0404, 0.0627, 0.0046, 0.0252, 0.0142, 0.0291, 0.0027, 0.0484, 0.0415,
     0.0079, 0.0688, 0.0953, 0.0423, 0.0599, 0.0926, 0.0291, 0.0123, 0.0656, 0.0682,
     0.0738, 0.0081, 0.0167, 0.0181, 0.0749, 0.0089, 0.0475, 0.0286, 0.0243, 0.0898,
     0.0824, 0.035, 0.105, 0.1234, 0.0432, 0.0245, 0.0229, 0.0688, 0.0608, 0.023,
     0.0479, 0.0406, 0.0519, 0.0061, 0.0647, 0.0121, 0.0432, 0.1267, 0.0249, 0.1787,
     0.1163, 0.1276, 0.0951, 0.0547, 0.0839, 0.0887, 0.0233, 0.0586, 0.0902, 0.0391,
     0.0227, 0.042, 0.0591, 0.0438, 0.0735, 0.1515, 0.0488, 0.1635, 0.1734, 0.1731,
     0.0752, 0.0208, 0.1673, 0.0932, 0.1048, 0.0682, 0.1057, 0.0249, 0.0834, 0.0865,
     0.0753, 0.1299, 0.0938, 0.2134, 0.1424, 0.0887, 0.1679, 0.1948, 0.0414, 0.0891,
     0.1154, 0.0955, 0.1338, 0.0993, 0.1024, 0.0892, 0.0677, 0.1182, 0.0098, 0.139,
     0.1134, 0.2613, 0.1972, 0.0817, 0.1119, 0.4262, 0.0259, 0.0836, 0.1098, 0.214,
     0.0644, 0.0717, 0.1209, 0.0973, 0.2002, 0.0999, 0.0684, 0.0695, 0.1228, 0.2832,
     0.1873, 0.1779, 0.0889, 0.6828, 0.0692, 0.1335, 0.137, 0.2546, 0.1522, 0.0576,
     0.1241, 0.084, 0.2876, 0.1976, 0.1487, 0.0568, 0.1508, 0.2718, 0.1806, 0.2053,
     0.1205, 0.5761, 0.1753, 0.1199, 0.1767, 0.2952, 0.078, 0.0818, 0.1533, 0.1191,
     0.3674, 0.2318, 0.1156, 0.0869, 0.1809, 0.3645, 0.2139, 0.3135, 0.0847, 0.4733,
     0.197, 0.1742, 0.1995, 0.4025, 0.1791, 0.1315, 0.2128, 0.1522, 0.2974, 0.2472,
     0.1654, 0.1935, 0.239, 0.3934, 0.1523, 0.3118, 0.1518, 0.2362, 0.1167, 0.1387,
     0.2869, 0.5148, 0.2681, 0.1862, 0.2536, 0.1322, 0.0837, 0.288, 0.3833, 0.1478,
     0.2947, 0.3843, 0.1975, 0.3686, 0.2305, 0.1023, 0.1683, 0.2042, 0.3275, 0.4901,
     0.1788, 0.2789, 0.2686, 0.1434, 0.1912, 0.2126, 0.3598, 0.1871, 0.2866, 0.4677,
     0.4844, 0.3885, 0.2793, 0.2904, 0.0814, 0.258, 0.3769, 0.4127, 0.1039, 0.2579,
     0.2803, 0.1244, 0.504, 0.0708, 0.1713, 0.1994, 0.401, 0.5364, 0.7298, 0.585,
     0.3404, 0.4713, 0.2179, 0.2616, 0.4169, 0.3575, 0.198, 0.224, 0.1886, 0.0653,
     0.6352, 0.1194, 0.1136, 0.3283, 0.5325, 0.4823, 0.7807, 0.7868, 0.4527, 0.4659,
     0.5121, 0.2097, 0.5036, 0.3447, 0.3234, 0.2568, 0.1485, 0.089, 0.6804, 0.2808,
     0.0349, 0.6861, 0.5486, 0.4835, 0.7906, 0.9739, 0.695, 0.1415, 0.7231, 0.2532,
     0.618, 0.3068, 0.3748, 0.2933, 0.216, 0.1226, 0.7505, 0.4221, 0.3796, 0.5814,
     0.5823, 0.5862, 0.6122, 1, 0.8807, 0.0849, 0.7776, 0.3213, 0.8025, 0.2945, 0.2586,
     0.2991, 0.2417, 0.1846, 0.6595, 0.5279, 0.7401, 0.25, 0.6041, 0.7579, 0.42, 0.9843,
     0.9154, 0.3257, 0.6222, 0.4327, 0.9333, 0.4351, 0.368, 0.3924, 0.2989, 0.388,
     0.4509, 0.5857, 0.9925, 0.1734, 0.6749, 0.6997, 0.2807, 0.861, 0.7542, 0.9007,
     0.3501, 0.476, 0.9399, 0.7264, 0.3508, 0.4691, 0.3341, 0.3658, 0.2964, 0.6153,
     0.9802, 0.3363, 0.7084, 0.6918, 0.5148, 0.8443, 0.6736, 0.9312, 0.3733, 0.5328,
     0.9275, 0.8147, 0.5606, 0.5665, 0.3786, 0.2297, 0.4019, 0.6753, 0.889, 0.5588,
     0.789, 0.8633, 0.7569, 0.9061, 0.7146, 0.4856, 0.2622, 0.6057, 0.945, 0.8103,
     0.5231, 0.6464, 0.3956, 0.261, 0.6794, 0.7873, 0.6712, 0.6592, 0.9284, 0.9107,
     0.8596, 0.5847, 0.8335, 0.1346, 0.3776, 0.6696, 0.8328, 0.6665, 0.5469, 0.6774,
     0.5232, 0.4193, 0.8297, 0.8974, 0.4286, 0.7012, 0.9781, 0.9346, 1, 0.4033, 0.7701,
     0.1604, 0.7361, 0.7476, 0.7773, 0.6958, 0.6954, 0.7577, 0.6913, 0.5848, 1, 0.9828,
     0.3374, 0.8099, 0.9738, 0.7884, 0.8457, 0.5946, 0.6993, 0.2737, 0.8673, 0.893,
     0.7007, 0.7748, 0.6352, 0.8856, 0.7868, 0.5643, 0.824, 1, 0.7366, 0.8901, 1,
     0.8585, 0.6797, 0.6793, 0.6543, 0.5609, 0.8223, 0.9405, 0.6154, 0.8688, 0.6757,
     0.9419, 0.8337, 0.5448, 0.7115, 0.846, 0.9611, 0.8745, 0.9702, 0.9261, 0.6971,
     0.6389, 0.504, 0.3654, 0.7772, 1, 0.581, 1, 0.8499, 1, 0.9199, 0.4772, 0.7726,
     0.6055, 0.7353, 0.7887, 0.9956, 0.708, 0.5843, 0.5002, 0.4926, 0.6139, 0.7862,
     0.9785, 0.4454, 0.9941, 0.8025, 0.8564, 1, 0.6897, 0.6124, 0.3036, 0.4856, 0.8725,
     0.8235, 0.5779, 0.4772, 0.5578, 0.4992, 0.547, 0.5652, 0.8473, 0.3707, 0.8793,
     0.6563, 0.679, 0.899, 0.9797, 0.4936, 0.0144, 0.1594, 0.9376, 0.602, 0.5215,
     0.5201, 0.4831, 0.4161, 0.8474, 0.3635, 0.7639, 0.2891, 0.6482, 0.8591, 0.5587,
     0.6456, 1, 0.5648, 0.2526, 0.3007, 0.892, 0.5342, 0.4505, 0.4241, 0.4729, 0.1631,
     0.5638, 0.3534, 0.6701, 0.2185, 0.5876, 0.6655, 0.4147, 0.5967, 0.9546, 0.4906,
     0.4335, 0.4096, 0.7508, 0.4867, 0.3129, 0.1592, 0.3318, 0.0404, 0.5443, 0.3865,
     0.4989, 0.1711, 0.6408, 0.5369, 0.2946, 0.4355, 0.8835, 0.182, 0.4918, 0.317,
     0.6832, 0.3526, 0.1448, 0.1668, 0.3969, 0.0637, 0.5086, 0.337, 0.3718, 0.3578,
     0.4972, 0.3118, 0.2025, 0.2997, 0.7662, 0.1811, 0.5409, 0.3305, 0.761, 0.1566,
     0.1046, 0.0588, 0.3894, 0.2962, 0.6253, 0.1693, 0.2196, 0.3947, 0.2755, 0.3763,
     0.0688, 0.2294, 0.6547, 0.1107, 0.5961, 0.3408, 0.9017, 0.0946, 0.182, 0.3967,
     0.2314, 0.3609, 0.8497, 0.2627, 0.1416, 0.2867, 0.03, 0.2801, 0.1171, 0.1866,
     0.5447, 0.4603, 0.5248, 0.2186, 1, 0.1613, 0.1519, 0.7147, 0.1036, 0.1866, 0.8406,
     0.3195, 0.268, 0.2401, 0.3356, 0.0875, 0.2157, 0.0922, 0.4593, 0.665, 0.3777,
     0.2463, 0.9123, 0.2824, 0.1017, 0.7319, 0.1312, 0.0476, 0.842, 0.1388, 0.263,
     0.3619, 0.3167, 0.3319, 0.2216, 0.1829, 0.4679, 0.6423, 0.2369, 0.2726, 0.7388,
     0.339, 0.1438, 0.3509, 0.0864, 0.1497, 0.9136, 0.1048, 0.3104, 0.3314, 0.4133,
     0.4237, 0.2776, 0.1743, 0.1987, 0.2166, 0.172, 0.168, 0.5915, 0.3019, 0.1986,
     0.0589, 0.2569, 0.2405, 0.7713, 0.1681, 0.3392, 0.3763, 0.6281, 0.1801, 0.2309,
     0.2452, 0.0699, 0.1951, 0.1878, 0.2792, 0.4057, 0.2945, 0.2039, 0.269, 0.3179,
     0.198, 0.4882, 0.191, 0.2123, 0.4767, 0.4977, 0.3743, 0.1444, 0.2407, 0.1493,
     0.4947, 0.325, 0.2558, 0.3019, 0.2978, 0.2778, 0.42, 0.2649, 0.3175, 0.3724,
     0.1174, 0.117, 0.4059, 0.2613, 0.4627, 0.1513, 0.2518, 0.1713, 0.4925, 0.2575,
     0.174, 0.2331, 0.2676, 0.2879, 0.3874, 0.2714, 0.2379, 0.4469, 0.0933, 0.2655,
     0.3661, 0.4697, 0.1614, 0.1745, 0.3184, 0.1654, 0.4041, 0.2423, 0.2121, 0.2931,
     0.2055, 0.1331, 0.244, 0.1713, 0.1716, 0.4586, 0.0856, 0.2203, 0.232, 0.4806,
     0.2494, 0.1756, 0.1685, 0.26, 0.2402, 0.2706, 0.1099, 0.2298, 0.2069, 0.114,
     0.2, 0.0584, 0.1559, 0.4491, 0.0951, 0.1541, 0.145, 0.4921, 0.3202, 0.1424, 0.0675,
     0.3846, 0.1392, 0.2323, 0.0985, 0.2391, 0.1625, 0.131, 0.2307, 0.123, 0.1556,
     0.5616, 0.0986, 0.1464, 0.1017, 0.5294, 0.2265, 0.0908, 0.1186, 0.3754, 0.1779,
     0.1724, 0.1271, 0.191, 0.1216, 0.1433, 0.1886, 0.22, 0.0422, 0.4305, 0.0956,
     0.1044, 0.1111, 0.2216, 0.1146, 0.0138, 0.1833, 0.2414, 0.1946, 0.1457, 0.1459,
     0.1096, 0.1013, 0.0624, 0.196, 0.2198, 0.0493, 0.0945, 0.0426, 0.1225, 0.0655,
     0.1401, 0.0476, 0.0469, 0.1878, 0.1077, 0.1723, 0.1175, 0.1164, 0.03, 0.0744,
     0.01, 0.1701, 0.1074, 0.0476, 0.0794, 0.0407, 0.0745, 0.0271, 0.1888, 0.0943,
     0.048, 0.1114, 0.0224, 0.1522, 0.0868, 0.0777, 0.0171, 0.0386, 0.0098, 0.1366,
     0.0423, 0.0219, 0.0274, 0.0106, 0.049, 0.0244, 0.0947, 0.0824, 0.0159, 0.031,
     0.0155, 0.0929, 0.0392, 0.0439, 0.0383, 0.005, 0.0131, 0.0398, 0.0162, 0.0059,
     0.0154, 0.0179, 0.0224, 0.0179, 0.0134, 0.0171, 0.0045, 0.0143, 0.0187, 0.0179,
     0.0131, 0.0061, 0.0053, 0.0146, 0.0152, 0.0143, 0.0093, 0.0086, 0.014, 0.0056,
     0.0032, 0.0109, 0.031, 0.0244, 0.0015, 0.0138, 0.0125, 0.0242, 0.0092, 0.0145,
     0.009, 0.004, 0.0255, 0.0093, 0.0046, 0.0061, 0.0455, 0.0236, 0.0076, 0.0147,
     0.0237, 0.0258, 0.0052, 0.0108, 0.0028, 0.0083, 0.0078, 0.0128, 0.0042, 0.0122,
     0.0071, 0.0033, 0.0044, 0.0015, 0.0213, 0.0114, 0.0045, 0.017, 0.0078, 0.0143,
     0.0038, 0.0062, 0.0067, 0.0037, 0.0071, 0.0145, 0.0153, 0.0107, 0.0263, 0.0113,
     0.0078, 0.0084, 0.0082, 0.0136, 0.0056, 0.0158, 0.0144, 0.0226, 0.0079, 0.0044,
     0.012, 0.0095, 0.0081, 0.0058, 0.0106, 0.0112, 0.0079, 0.003, 0.0102, 0.0128,
     0.0124, 0.0117, 0.0075, 0.0046, 0.017, 0.0187, 0.0114, 0.0072, 0.0012, 0.0105,
     0.0034, 0.0049, 0.002, 0.0102, 0.0111, 0.0057, 0.0065, 0.0054, 0.0167, 0.006,
     0.0037, 0.0073, 0.0012, 0.0185, 0.005, 7e-04, 0.0022, 0.003, 0.0064, 0.0065,
     0.0105, 0.0052, 0.0107, 0.009, 0.0061, 0.0011, 0.0103, 0.0058, 0.0045, 0.0054,
     0.0109, 0.011, 0.003, 0.0054, 0.0058, 0.0132, 0.0037, 0.0093, 0.0049, 0.0024,
     0.0068, 0.0057, 0.0062, 0.0019, 0.0205, 0.0031, 0.0029, 0.0033, 0.0036, 0.0094,
     0.0064, 0.0035, 0.0042, 0.0068, 0.0036, 0.0059, 0.007, 0.0079, 0.0097, 0.0068,
     0.0043, 0.0023, 0.0178, 0.0072, 8e-04, 0.0045, 0.0043, 0.0078, 0.0058, 1e-04,
     0.0067, 0.0108, 0.0012, 0.0022, 0.008, 0.0031, 0.0067, 0.0024, 0.0053, 0.0062,
     0.0187, 0.0045, 0.0018, 0.0079, 0.0018, 0.0112, 0.003, 0.0055, 0.0012, 0.009,
     0.0037), .Dim = c(18L, 60L), .Dimnames = list(c("9", "43", "121", "125", "87",
     "85", "27", "137", "32", "56", "114", "184", "140", "53", "192", "69", "90",
     "164"), c("V1", "V2", "V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10", "V11",
     "V12", "V13", "V14", "V15", "V16", "V17", "V18", "V19", "V20", "V21", "V22",
     "V23", "V24", "V25", "V26", "V27", "V28", "V29", "V30", "V31", "V32", "V33",
     "V34", "V35", "V36", "V37", "V38", "V39", "V40", "V41", "V42", "V43", "V44",
     "V45", "V46", "V47", "V48", "V49", "V50", "V51", "V52", "V53", "V54", "V55",
     "V56", "V57", "V58", "V59", "V60"))), y = structure(c(2L, 2L, 1L, 1L, 2L, 2L,
     2L, 1L, 2L, 2L, 1L, 1L, 1L, 2L, 1L, 2L, 2L, 1L), .Label = c("M", "R"), class = "factor"),
     family = "binomial"), nobs = 18L), class = c("lognet", "glmnet"), mlr.train.info = structure(list(
     factors = structure(list(), .Names = character(0)), ordered = structure(list(), .Names = character(0)),
     restore.levels = FALSE, factors.to.dummies = FALSE, ordered.to.int = FALSE), class = "FixDataInfo")),
     task.desc = structure(list(id = "binary", type = "classif", target = "Class",
     size = 18L, n.feat = c(numerics = 60L, factors = 0L, ordered = 0L, functionals = 0L
     ), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE, has.coordinates = FALSE,
     class.levels = c("M", "R"), positive = "M", negative = "R", class.distribution = structure(c(M = 8L,
     R = 10L), .Dim = 2L, .Dimnames = structure(list(c("M", "R")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc")), subset = c(18L, 49L, 25L, 53L, 55L, 3L, 30L,
     50L, 56L, 60L, 58L, 24L, 34L, 29L, 5L, 43L, 12L, 2L), features = c("V1", "V2",
     "V3", "V4", "V5", "V6", "V7", "V8", "V9", "V10", "V11", "V12", "V13", "V14",
     "V15", "V16", "V17", "V18", "V19", "V20", "V21", "V22", "V23", "V24", "V25",
     "V26", "V27", "V28", "V29", "V30", "V31", "V32", "V33", "V34", "V35", "V36",
     "V37", "V38", "V39", "V40", "V41", "V42", "V43", "V44", "V45", "V46", "V47",
     "V48", "V49", "V50", "V51", "V52", "V53", "V54", "V55", "V56", "V57", "V58",
     "V59", "V60"), factor.levels = list(Class = c("M", "R")), time = 0.0120000000000005,
     dump = NULL), class = "WrappedModel"), .newdata = structure(list(V1 = c(0.0137,
     0.1083, 0.027, 0.0115, 0.0333, 0.0203, 0.0443, 0.0132, 0.0235, 0.0329, 0.0394, 0.0201,
     0.1371, 0.0202, 0.0126, 0.053, 0.0164, 0.0373, 0.0411, 0.0216, 0.0091, 0.0519, 0.0134,
     0.021, 0.0201, 0.0099, 0.0968, 0.0261, 0.0423, 0.0126, 0.0409, 0.0179, 0.031, 0.013,
     0.0331, 0.0762, 0.0428, 0.0158, 0.1313, 0.009, 0.0125, 0.0086, 0.0209, 0.0162), V2 = c(0.0297,
     0.107, 0.0092, 0.015, 0.0221, 0.0121, 0.0446, 0.008, 0.022, 0.0216, 0.042, 0.0026,
     0.1226, 0.0104, 0.0519, 0.0885, 0.0173, 0.0281, 0.0277, 0.0215, 0.0213, 0.0548, 0.0172,
     0.0121, 0.0423, 0.0484, 0.0821, 0.0266, 0.0321, 0.0149, 0.0421, 0.0136, 0.0221, 6e-04,
     0.0423, 0.0666, 0.0555, 0.0239, 0.2339, 0.0062, 0.0152, 0.0215, 0.0191, 0.0041),
     V3 = c(0.0116, 0.0257, 0.0145, 0.0136, 0.027, 0.038, 0.0235, 0.0188, 0.0167,
     0.0386, 0.0446, 0.0138, 0.1385, 0.0325, 0.0621, 0.1997, 0.0347, 0.0232, 0.0604,
     0.0273, 0.0206, 0.0842, 0.0178, 0.0203, 0.0554, 0.0299, 0.0629, 0.0223, 0.0709,
     0.0641, 0.0573, 0.0408, 0.0433, 0.0088, 0.0474, 0.0481, 0.0708, 0.015, 0.3059,
     0.0253, 0.0218, 0.0242, 0.0411, 0.0239), V4 = c(0.0082, 0.0837, 0.0278, 0.0076,
     0.0481, 0.0128, 0.1008, 0.0141, 0.0516, 0.0627, 0.0551, 0.0062, 0.1484, 0.0239,
     0.0518, 0.2604, 0.007, 0.0225, 0.0525, 0.0139, 0.0505, 0.0319, 0.0363, 0.1036,
     0.0783, 0.0297, 0.0608, 0.0749, 0.0108, 0.1732, 0.013, 0.0633, 0.0191, 0.0456,
     0.0818, 0.0394, 0.0618, 0.0494, 0.4264, 0.0489, 0.0175, 0.0445, 0.0321, 0.0441
     ), V5 = c(0.0241, 0.0748, 0.0412, 0.0211, 0.0679, 0.0537, 0.2252, 0.0436, 0.0746,
     0.1158, 0.0597, 0.0133, 0.1776, 0.0807, 0.1072, 0.3225, 0.0187, 0.0179, 0.0489,
     0.0357, 0.0657, 0.1158, 0.0444, 0.1675, 0.062, 0.0652, 0.0617, 0.1364, 0.107,
     0.2565, 0.0183, 0.0596, 0.0964, 0.0525, 0.0835, 0.059, 0.1215, 0.0988, 0.401,
     0.1197, 0.0362, 0.0667, 0.0698, 0.063), V6 = c(0.0253, 0.1125, 0.0757, 0.1058,
     0.0981, 0.0874, 0.2611, 0.0668, 0.1121, 0.1482, 0.1416, 0.0151, 0.1428, 0.1529,
     0.2587, 0.2247, 0.0671, 0.0733, 0.0385, 0.0785, 0.0795, 0.0922, 0.0744, 0.0418,
     0.0871, 0.1077, 0.1207, 0.1513, 0.0973, 0.2559, 0.1019, 0.0808, 0.1827, 0.0778,
     0.0756, 0.0649, 0.1524, 0.1425, 0.1791, 0.1589, 0.0696, 0.0771, 0.1579, 0.0921
     ), V7 = c(0.0279, 0.3322, 0.1026, 0.1023, 0.0843, 0.1021, 0.2061, 0.0609, 0.1258,
     0.2054, 0.0956, 0.0541, 0.1773, 0.1154, 0.2304, 0.0617, 0.1056, 0.0841, 0.0611,
     0.0906, 0.097, 0.1027, 0.08, 0.0723, 0.1201, 0.2363, 0.0944, 0.1316, 0.0961,
     0.2947, 0.1054, 0.209, 0.1106, 0.0931, 0.0374, 0.1209, 0.1543, 0.1463, 0.1853,
     0.1392, 0.0873, 0.0499, 0.1438, 0.1368), V8 = c(0.013, 0.459, 0.1138, 0.044,
     0.1172, 0.0852, 0.1668, 0.0131, 0.1717, 0.1605, 0.0802, 0.021, 0.2161, 0.0608,
     0.2067, 0.2287, 0.0697, 0.1031, 0.1117, 0.0908, 0.0872, 0.0613, 0.0456, 0.0828,
     0.2707, 0.2385, 0.4223, 0.1654, 0.1323, 0.411, 0.107, 0.3465, 0.1702, 0.0941,
     0.0961, 0.2467, 0.0391, 0.1219, 0.0055, 0.0987, 0.0616, 0.0906, 0.1402, 0.1078
     ), V9 = c(0.0489, 0.5526, 0.0794, 0.0931, 0.0759, 0.1136, 0.1801, 0.0899, 0.3074,
     0.2532, 0.1618, 0.0505, 0.163, 0.1317, 0.3416, 0.095, 0.0962, 0.0993, 0.1237,
     0.1151, 0.0743, 0.1465, 0.0368, 0.0494, 0.1206, 0.0075, 0.5744, 0.1864, 0.2462,
     0.4983, 0.2302, 0.5276, 0.2804, 0.1711, 0.0548, 0.3564, 0.061, 0.1697, 0.1929,
     0.0955, 0.1252, 0.1229, 0.3048, 0.1552), V10 = c(0.0874, 0.5966, 0.152, 0.0734,
     0.092, 0.1747, 0.3083, 0.0922, 0.3199, 0.2672, 0.2558, 0.1097, 0.2067, 0.137,
     0.4284, 0.074, 0.0251, 0.0802, 0.23, 0.0973, 0.0837, 0.2838, 0.125, 0.0686, 0.0279,
     0.1882, 0.5025, 0.2013, 0.2696, 0.592, 0.2259, 0.5965, 0.4432, 0.1483, 0.0193,
     0.4459, 0.0113, 0.1923, 0.2231, 0.1895, 0.1302, 0.1185, 0.3914, 0.1779), V11 = c(0.11,
     0.5304, 0.1675, 0.074, 0.1475, 0.2198, 0.3794, 0.1445, 0.2946, 0.3056, 0.3078,
     0.0841, 0.4257, 0.0843, 0.3015, 0.161, 0.0801, 0.1564, 0.137, 0.1203, 0.1579,
     0.2802, 0.2405, 0.1125, 0.2251, 0.1456, 0.3488, 0.289, 0.3412, 0.5832, 0.2373,
     0.6254, 0.5222, 0.1532, 0.0897, 0.4152, 0.1255, 0.2361, 0.2907, 0.1896, 0.0888,
     0.0775, 0.3504, 0.2164), V12 = c(0.1084, 0.2251, 0.137, 0.0622, 0.0522, 0.2721,
     0.5364, 0.1475, 0.2484, 0.3161, 0.3404, 0.0942, 0.5484, 0.0269, 0.1207, 0.2226,
     0.1056, 0.2565, 0.1335, 0.1102, 0.0898, 0.3086, 0.2325, 0.1741, 0.2615, 0.1892,
     0.17, 0.365, 0.4292, 0.5419, 0.3323, 0.4507, 0.5611, 0.11, 0.1734, 0.3952, 0.2473,
     0.2719, 0.2259, 0.2547, 0.05, 0.1101, 0.3669, 0.2568), V13 = c(0.1094, 0.2402,
     0.1361, 0.1055, 0.1119, 0.2105, 0.6173, 0.2087, 0.251, 0.2314, 0.34, 0.1204,
     0.7131, 0.1254, 0.3299, 0.2703, 0.1266, 0.2624, 0.2137, 0.1192, 0.0309, 0.2657,
     0.2523, 0.271, 0.177, 0.3176, 0.2076, 0.351, 0.3682, 0.5472, 0.3827, 0.3693,
     0.5379, 0.089, 0.1936, 0.4256, 0.3011, 0.3049, 0.3136, 0.4073, 0.0628, 0.1042,
     0.3943, 0.3089), V14 = c(0.1023, 0.2689, 0.1345, 0.1183, 0.097, 0.1727, 0.7842,
     0.2558, 0.1806, 0.2067, 0.3951, 0.042, 0.7003, 0.3046, 0.5707, 0.3365, 0.089,
     0.1179, 0.1526, 0.1762, 0.1856, 0.3801, 0.1472, 0.3087, 0.3709, 0.134, 0.3087,
     0.3495, 0.394, 0.5314, 0.484, 0.2864, 0.4048, 0.1236, 0.2803, 0.4135, 0.3747,
     0.2986, 0.3302, 0.2988, 0.1274, 0.0853, 0.3311, 0.3829), V15 = c(0.0601, 0.6646,
     0.2144, 0.1721, 0.1174, 0.204, 0.8392, 0.2603, 0.1413, 0.1804, 0.3352, 0.0031,
     0.6777, 0.5584, 0.6962, 0.4266, 0.0198, 0.0597, 0.0775, 0.239, 0.2969, 0.5626,
     0.0669, 0.3575, 0.4533, 0.2169, 0.4224, 0.4325, 0.2965, 0.4981, 0.6812, 0.1635,
     0.2245, 0.1197, 0.3313, 0.4528, 0.452, 0.2226, 0.366, 0.2901, 0.0801, 0.0456,
     0.3331, 0.4393), V16 = c(0.0906, 0.6632, 0.5354, 0.2584, 0.1678, 0.1786, 0.9016,
     0.1985, 0.3019, 0.2808, 0.2252, 0.0162, 0.7939, 0.7973, 0.9751, 0.4144, 0.1133,
     0.1563, 0.1196, 0.2138, 0.2032, 0.4376, 0.11, 0.4998, 0.5553, 0.2458, 0.5312,
     0.5398, 0.3172, 0.6985, 0.7555, 0.0422, 0.1784, 0.1145, 0.502, 0.5326, 0.5392,
     0.1745, 0.3956, 0.5326, 0.0742, 0.1304, 0.3002, 0.5335), V17 = c(0.1313, 0.1674,
     0.683, 0.3232, 0.1642, 0.1318, 1, 0.2394, 0.3635, 0.4423, 0.2086, 0.0624, 0.9382,
     0.8341, 1, 0.5655, 0.2826, 0.2241, 0.0903, 0.1929, 0.1264, 0.2617, 0.2353, 0.6011,
     0.4616, 0.2589, 0.2436, 0.6237, 0.2825, 0.8292, 0.9522, 0.1785, 0.2297, 0.2137,
     0.636, 0.7306, 0.6588, 0.2459, 0.4386, 0.4022, 0.2048, 0.269, 0.2324, 0.5996),
     V18 = c(0.2758, 0.0837, 0.56, 0.3817, 0.1205, 0.226, 0.8911, 0.3134, 0.3887,
     0.5947, 0.2248, 0.2127, 0.8925, 0.8057, 0.9293, 0.6921, 0.3234, 0.3586, 0.0689,
     0.1765, 0.1655, 0.1199, 0.3282, 0.647, 0.3797, 0.2786, 0.1884, 0.6876, 0.305,
     0.7839, 0.9826, 0.4394, 0.272, 0.2838, 0.7096, 0.6193, 0.7113, 0.31, 0.467, 0.1571,
     0.295, 0.2947, 0.1381, 0.6728), V19 = c(0.366, 0.4331, 0.3093, 0.4243, 0.0494,
     0.2358, 0.8753, 0.4077, 0.298, 0.6601, 0.3382, 0.3436, 0.9146, 0.8616, 0.621,
     0.8547, 0.3238, 0.1792, 0.2071, 0.0746, 0.1661, 0.6676, 0.4416, 0.8067, 0.345,
     0.2298, 0.1908, 0.7329, 0.2408, 0.8215, 0.8871, 0.695, 0.5209, 0.364, 0.8333,
     0.2032, 0.7602, 0.3572, 0.5255, 0.3024, 0.3193, 0.3669, 0.345, 0.7309), V20 = c(0.5269,
     0.8718, 0.3226, 0.4217, 0.1544, 0.3107, 0.7886, 0.4529, 0.2219, 0.5844, 0.4578,
     0.3813, 0.7832, 0.8769, 0.4586, 0.9234, 0.4333, 0.3256, 0.2975, 0.1265, 0.2091,
     0.9402, 0.5167, 0.9008, 0.2665, 0.0656, 0.8321, 0.8107, 0.542, 0.9363, 0.8268,
     0.8097, 0.6898, 0.543, 0.873, 0.4636, 0.8672, 0.4283, 0.3735, 0.3907, 0.4567,
     0.4948, 0.4428, 0.8092), V21 = c(0.581, 0.7992, 0.443, 0.4449, 0.3485, 0.3906,
     0.7156, 0.4893, 0.1624, 0.4539, 0.6474, 0.3825, 0.796, 0.9413, 0.5001, 0.9171,
     0.6068, 0.6079, 0.2836, 0.2005, 0.231, 0.7832, 0.6508, 0.8906, 0.2395, 0.1441,
     1, 0.8396, 0.6802, 1, 0.7561, 0.855, 0.8202, 0.6673, 0.8073, 0.4148, 0.8416,
     0.4268, 0.2243, 0.3542, 0.5959, 0.6275, 0.489, 0.8941), V22 = c(0.6181, 0.3712,
     0.5573, 0.4075, 0.6146, 0.3631, 0.7581, 0.5666, 0.1343, 0.4789, 0.6708, 0.4764,
     0.7983, 0.9403, 0.5032, 1, 0.7652, 0.6988, 0.3353, 0.1571, 0.446, 0.5352, 0.7793,
     0.9338, 0.1127, 0.1179, 0.4076, 0.8632, 0.632, 0.9224, 0.8217, 0.8717, 0.878,
     0.7979, 0.7507, 0.4292, 0.7974, 0.3735, 0.1973, 0.4438, 0.7101, 0.8162, 0.3677,
     0.9668), V23 = c(0.5875, 0.1703, 0.5782, 0.3306, 0.9146, 0.4809, 0.6372, 0.6234,
     0.2046, 0.5646, 0.7007, 0.6313, 0.7716, 0.9409, 0.7082, 0.9532, 0.9203, 0.8391,
     0.3622, 0.2605, 0.6634, 0.6809, 0.7978, 1, 0.2556, 0.1668, 0.096, 0.8747, 0.5824,
     0.7839, 0.6967, 0.8601, 0.76, 0.9273, 0.7526, 0.573, 0.8385, 0.4585, 0.4337,
     0.6414, 0.8225, 0.9237, 0.4379, 1), V24 = c(0.4639, 0.1611, 0.6173, 0.4012, 0.9364,
     0.6531, 0.321, 0.6741, 0.3791, 0.5281, 0.7619, 0.7523, 0.6615, 1, 0.842, 0.9101,
     0.9719, 0.8553, 0.3202, 0.5386, 0.6933, 0.9174, 0.7786, 0.9102, 0.5169, 0.1783,
     0.1928, 0.9607, 0.6805, 0.547, 0.6444, 0.9201, 0.7616, 0.9027, 0.7298, 0.5399,
     0.9317, 0.6094, 0.6532, 0.4601, 0.8425, 0.871, 0.4864, 0.9893), V25 = c(0.5424,
     0.2086, 0.8132, 0.4466, 0.8677, 0.7812, 0.2076, 0.8282, 0.5771, 0.7115, 0.7745,
     0.8675, 0.486, 0.9725, 0.8109, 0.8337, 0.9207, 0.771, 0.3452, 0.844, 0.7663,
     0.7613, 0.8587, 0.8496, 0.3779, 0.2476, 0.2419, 0.9716, 0.5984, 0.4562, 0.6948,
     0.8729, 0.7152, 0.9192, 0.6177, 0.3161, 0.8555, 0.7221, 0.507, 0.6009, 0.9065,
     0.8052, 0.6207, 0.9376), V26 = c(0.7367, 0.2847, 0.9819, 0.5218, 0.8772, 0.8395,
     0.2279, 0.8823, 0.7545, 1, 0.6767, 0.8788, 0.5572, 0.9309, 0.769, 0.7053, 0.7545,
     0.6215, 0.3562, 1, 0.8206, 0.822, 0.9321, 0.7867, 0.4082, 0.257, 0.379, 0.9121,
     0.8412, 0.5922, 0.8014, 0.8084, 0.7288, 1, 0.4946, 0.2285, 0.6162, 0.7595, 0.2796,
     0.869, 0.9802, 0.8756, 0.7256, 0.8991), V27 = c(0.9089, 0.2211, 0.9823, 0.7552,
     0.8553, 0.918, 0.3309, 0.9196, 0.8406, 0.9564, 0.7373, 0.7901, 0.4697, 0.9351,
     0.8105, 0.6534, 0.8289, 0.5736, 0.3892, 0.8684, 0.7049, 0.8872, 0.9454, 0.7688,
     0.5353, 0.1036, 0.2893, 0.8576, 0.9911, 0.5448, 0.6053, 0.8694, 0.8686, 0.9821,
     0.4531, 0.6995, 0.4139, 0.8706, 0.4163, 0.8345, 1, 1, 0.6624, 0.9184), V28 = c(1,
     0.6134, 0.9166, 0.9503, 0.8833, 0.9769, 0.2847, 0.8965, 0.8547, 0.609, 0.7834,
     0.8357, 0.564, 0.7317, 0.6203, 0.4483, 0.8907, 0.4402, 0.6622, 0.6742, 0.756,
     0.6091, 0.8645, 0.7718, 0.5116, 0.5356, 0.3451, 0.8798, 0.9187, 0.3971, 0.6084,
     0.8411, 0.9509, 0.9092, 0.4099, 1, 0.3269, 1, 0.595, 0.7669, 0.8752, 0.9858,
     0.7689, 0.9128), V29 = c(0.8247, 0.5807, 0.7423, 1, 1, 0.8937, 0.1949, 0.7549,
     0.9036, 0.5112, 0.9619, 0.9631, 0.4517, 0.4421, 0.2356, 0.246, 0.7309, 0.4056,
     0.9254, 0.5537, 0.7466, 0.2967, 0.722, 0.6268, 0.4544, 0.7124, 0.3777, 0.772,
     0.8005, 0.0882, 0.8877, 0.5793, 0.8348, 0.8184, 0.454, 0.7262, 0.3108, 0.9815,
     0.5242, 0.5081, 0.7583, 0.9427, 0.7981, 0.7811), V30 = c(0.5441, 0.6925, 0.7736,
     0.9084, 0.8296, 0.7022, 0.1671, 0.6736, 1, 0.4, 1, 0.9619, 0.3369, 0.3244, 0.2595,
     0.202, 0.6896, 0.4411, 1, 0.4638, 0.6387, 0.1103, 0.485, 0.4301, 0.4258, 0.6291,
     0.5213, 0.5711, 0.6713, 0.2385, 0.8557, 0.3754, 0.573, 0.6962, 0.4124, 0.4724,
     0.2554, 0.7187, 0.4178, 0.462, 0.6616, 0.8114, 0.8577, 0.6018), V31 = c(0.3349,
     0.3825, 0.8473, 0.8283, 0.6601, 0.65, 0.1025, 0.6463, 0.9646, 0.0482, 0.8086,
     0.9236, 0.2684, 0.4161, 0.6299, 0.1446, 0.5829, 0.513, 0.8528, 0.3609, 0.4846,
     0.1318, 0.1357, 0.2077, 0.3869, 0.4756, 0.2316, 0.4264, 0.5632, 0.2005, 0.5563,
     0.3485, 0.4363, 0.59, 0.3139, 0.5103, 0.3367, 0.5848, 0.3714, 0.538, 0.5786,
     0.6987, 0.9273, 0.3765), V32 = c(0.0877, 0.4303, 0.7352, 0.7571, 0.5499, 0.5069,
     0.1362, 0.5007, 0.7912, 0.1852, 0.5558, 0.8903, 0.2339, 0.4611, 0.6762, 0.0994,
     0.4935, 0.5965, 0.6297, 0.2055, 0.3328, 0.0624, 0.2951, 0.1198, 0.3939, 0.6015,
     0.3335, 0.286, 0.7332, 0.0587, 0.2897, 0.4639, 0.4289, 0.5447, 0.3194, 0.5459,
     0.4465, 0.4192, 0.2375, 0.5375, 0.5128, 0.681, 0.7009, 0.33), V33 = c(0.16, 0.7791,
     0.6671, 0.7262, 0.5716, 0.3903, 0.2212, 0.3663, 0.6412, 0.2186, 0.5409, 0.9708,
     0.3052, 0.4031, 0.2903, 0.151, 0.3101, 0.7272, 0.525, 0.162, 0.5356, 0.099, 0.4715,
     0.166, 0.4661, 0.7208, 0.4781, 0.3114, 0.6038, 0.2544, 0.3638, 0.6495, 0.424,
     0.5142, 0.3692, 0.2881, 0.5, 0.3756, 0.0863, 0.3844, 0.4776, 0.6591, 0.4851,
     0.228), V34 = c(0.4169, 0.8703, 0.6083, 0.6152, 0.6859, 0.3009, 0.1124, 0.2298,
     0.5986, 0.1436, 0.4988, 0.9647, 0.3016, 0.3, 0.4393, 0.2392, 0.0306, 0.6539,
     0.4012, 0.2092, 0.8741, 0.4006, 0.6036, 0.2618, 0.3974, 0.6234, 0.6116, 0.2066,
     0.2575, 0.2009, 0.4786, 0.6901, 0.3156, 0.5389, 0.3776, 0.0981, 0.5111, 0.3263,
     0.1437, 0.3601, 0.4994, 0.6954, 0.3409, 0.0212), V35 = c(0.6576, 1, 0.6239, 0.568,
     0.6825, 0.1565, 0.1677, 0.1362, 0.6835, 0.1757, 0.3108, 0.7892, 0.2753, 0.2459,
     0.8529, 0.4434, 0.0244, 0.5902, 0.2901, 0.31, 0.8573, 0.3666, 0.8083, 0.3862,
     0.2194, 0.5725, 0.6705, 0.1165, 0.0349, 0.0329, 0.2908, 0.5666, 0.1287, 0.5531,
     0.4469, 0.1951, 0.5194, 0.1944, 0.2896, 0.7402, 0.5197, 0.729, 0.1406, 0.1117
     ), V36 = c(0.739, 0.9212, 0.5972, 0.5757, 0.5142, 0.0985, 0.1039, 0.2123, 0.7771,
     0.1428, 0.2897, 0.5307, 0.1041, 0.1348, 0.718, 0.5023, 0.1108, 0.5393, 0.2007,
     0.2344, 0.6718, 0.105, 0.987, 0.3958, 0.1816, 0.7523, 0.7375, 0.0185, 0.1799,
     0.1547, 0.0899, 0.5188, 0.1477, 0.5318, 0.4777, 0.4181, 0.4619, 0.1394, 0.4577,
     0.7761, 0.5071, 0.668, 0.1147, 0.1788), V37 = c(0.7963, 0.9386, 0.5715, 0.5324,
     0.275, 0.22, 0.2562, 0.2395, 0.8084, 0.1644, 0.2244, 0.2718, 0.1757, 0.2541,
     0.4801, 0.4441, 0.1594, 0.4897, 0.3356, 0.1058, 0.3446, 0.1915, 0.88, 0.3248,
     0.1023, 0.8712, 0.7356, 0.1302, 0.3039, 0.1212, 0.2043, 0.506, 0.2062, 0.4826,
     0.4716, 0.4604, 0.4234, 0.167, 0.3725, 0.3858, 0.4577, 0.5917, 0.1433, 0.2373
     ), V38 = c(0.7493, 0.9303, 0.5242, 0.3672, 0.1358, 0.2243, 0.2624, 0.2673, 0.7426,
     0.3089, 0.096, 0.1953, 0.3156, 0.2255, 0.5856, 0.4571, 0.1371, 0.4081, 0.4799,
     0.0383, 0.315, 0.393, 0.6411, 0.2302, 0.2108, 0.9252, 0.7792, 0.248, 0.476, 0.2446,
     0.1707, 0.3885, 0.24, 0.379, 0.4664, 0.3217, 0.4372, 0.1275, 0.3372, 0.0667,
     0.3505, 0.4899, 0.182, 0.2843), V39 = c(0.6795, 0.7314, 0.2924, 0.1669, 0.1551,
     0.2736, 0.2236, 0.2865, 0.6295, 0.3648, 0.2287, 0.1374, 0.3603, 0.1598, 0.4993,
     0.3927, 0.0696, 0.4145, 0.6147, 0.0528, 0.2702, 0.4288, 0.4276, 0.325, 0.3253,
     0.9709, 0.6788, 0.1637, 0.5756, 0.3171, 0.0407, 0.3762, 0.5173, 0.1831, 0.3893,
     0.2828, 0.4277, 0.1666, 0.3803, 0.3684, 0.1845, 0.3439, 0.3605, 0.2241), V40 = c(0.4713,
     0.4791, 0.1536, 0.0866, 0.2646, 0.2152, 0.118, 0.206, 0.5708, 0.4441, 0.3228,
     0.3105, 0.2736, 0.1485, 0.2866, 0.29, 0.0452, 0.6003, 0.6246, 0.1291, 0.2598,
     0.2546, 0.2702, 0.4022, 0.3697, 0.9297, 0.5259, 0.1103, 0.4254, 0.3195, 0.1286,
     0.3738, 0.5168, 0.175, 0.4255, 0.243, 0.4433, 0.2574, 0.4181, 0.6114, 0.189,
     0.2366, 0.5529, 0.2715), V41 = c(0.2355, 0.2087, 0.2003, 0.0646, 0.1994, 0.2438,
     0.1103, 0.1659, 0.4433, 0.3859, 0.3454, 0.379, 0.1301, 0.0845, 0.0601, 0.3408,
     0.062, 0.7196, 0.4973, 0.2241, 0.2742, 0.1151, 0.2642, 0.4344, 0.2912, 0.8995,
     0.2762, 0.2144, 0.5046, 0.3051, 0.1581, 0.2605, 0.1491, 0.1679, 0.4064, 0.1979,
     0.37, 0.2258, 0.3603, 0.351, 0.1967, 0.1716, 0.5988, 0.3363), V42 = c(0.1704,
     0.2016, 0.2031, 0.1891, 0.1883, 0.3154, 0.2831, 0.2633, 0.3361, 0.2813, 0.3882,
     0.4105, 0.2458, 0.0569, 0.1167, 0.499, 0.1421, 0.6633, 0.3492, 0.1915, 0.3594,
     0.2196, 0.3342, 0.4008, 0.301, 0.7911, 0.1545, 0.2033, 0.7179, 0.0836, 0.2191,
     0.1591, 0.2407, 0.0674, 0.3712, 0.2444, 0.3324, 0.2777, 0.2711, 0.2312, 0.1041,
     0.1013, 0.5077, 0.2546), V43 = c(0.2728, 0.1669, 0.2207, 0.2683, 0.2746, 0.2112,
     0.2385, 0.2552, 0.3795, 0.1238, 0.324, 0.3355, 0.3404, 0.0855, 0.2737, 0.3632,
     0.1597, 0.6287, 0.2662, 0.1587, 0.4382, 0.1879, 0.4335, 0.337, 0.2563, 0.56,
     0.2019, 0.1887, 0.6163, 0.1266, 0.1701, 0.1875, 0.3415, 0.0609, 0.3863, 0.1847,
     0.2564, 0.1613, 0.1653, 0.2195, 0.055, 0.0766, 0.5512, 0.1867), V44 = c(0.4016,
     0.2872, 0.1778, 0.2887, 0.1651, 0.0991, 0.0255, 0.1696, 0.495, 0.0953, 0.0926,
     0.2998, 0.1753, 0.1262, 0.2812, 0.1387, 0.1384, 0.4087, 0.3137, 0.0942, 0.246,
     0.1437, 0.4542, 0.2518, 0.1927, 0.2838, 0.2231, 0.137, 0.5663, 0.1381, 0.0971,
     0.2267, 0.4494, 0.0375, 0.2802, 0.0841, 0.2527, 0.1335, 0.1951, 0.3051, 0.0492,
     0.0845, 0.5027, 0.216), V45 = c(0.4125, 0.4374, 0.1353, 0.2341, 0.0575, 0.0594,
     0.1967, 0.1467, 0.4373, 0.1201, 0.1173, 0.2748, 0.0679, 0.1153, 0.2078, 0.18,
     0.0372, 0.3212, 0.4282, 0.084, 0.0758, 0.2146, 0.396, 0.2101, 0.2062, 0.4407,
     0.4221, 0.1376, 0.5749, 0.1136, 0.2217, 0.1577, 0.4624, 0.0533, 0.1283, 0.0692,
     0.2137, 0.1976, 0.2811, 0.1937, 0.0622, 0.026, 0.7034, 0.1278), V46 = c(0.347,
     0.3097, 0.1373, 0.1668, 0.0695, 0.194, 0.1483, 0.1286, 0.2404, 0.0825, 0.0566,
     0.2024, 0.1062, 0.057, 0.066, 0.1299, 0.0688, 0.2518, 0.4262, 0.067, 0.0187,
     0.236, 0.2525, 0.1181, 0.1751, 0.5507, 0.3067, 0.0307, 0.3593, 0.0516, 0.2732,
     0.1211, 0.2001, 0.0278, 0.1117, 0.0528, 0.1789, 0.1234, 0.2246, 0.157, 0.0505,
     0.0333, 0.5904, 0.0768), V47 = c(0.2739, 0.1578, 0.0749, 0.1015, 0.0598, 0.1937,
     0.0434, 0.0926, 0.1128, 0.0618, 0.0766, 0.1043, 0.0643, 0.0426, 0.0491, 0.0523,
     0.0867, 0.1482, 0.3511, 0.0342, 0.0797, 0.1125, 0.1084, 0.115, 0.0841, 0.4331,
     0.1329, 0.0373, 0.2526, 0.0073, 0.1874, 0.0883, 0.0775, 0.0179, 0.1303, 0.0357,
     0.101, 0.1554, 0.1921, 0.0479, 0.0247, 0.0205, 0.4069, 0.107), V48 = c(0.179,
     0.0553, 0.0472, 0.1195, 0.0456, 0.1082, 0.0627, 0.0716, 0.1654, 0.0141, 0.0969,
     0.0453, 0.0532, 0.0425, 0.0345, 0.0817, 0.0513, 0.0988, 0.2458, 0.0469, 0.0748,
     0.0254, 0.0372, 0.055, 0.1035, 0.2905, 0.1349, 0.0606, 0.2299, 0.0278, 0.1062,
     0.085, 0.1232, 0.0114, 0.0787, 0.0085, 0.0528, 0.1057, 0.15, 0.0538, 0.0219,
     0.0309, 0.2761, 0.0946), V49 = c(0.0922, 0.0334, 0.0325, 0.0704, 0.0021, 0.0336,
     0.0513, 0.0325, 0.0933, 0.0108, 0.0588, 0.0337, 0.0531, 0.0235, 0.0172, 0.0469,
     0.0092, 0.0317, 0.1259, 0.0357, 0.0367, 0.0285, 0.0286, 0.0293, 0.0641, 0.1981,
     0.1057, 0.0399, 0.1271, 0.0372, 0.0665, 0.0355, 0.0783, 0.0073, 0.0436, 0.023,
     0.0453, 0.049, 0.0665, 0.0146, 0.0102, 0.0101, 0.1584, 0.0636), V50 = c(0.0276,
     0.0209, 0.0179, 0.0167, 0.0068, 0.0177, 0.0473, 0.0258, 0.0225, 0.0124, 0.005,
     0.0122, 0.0272, 6e-04, 0.0287, 0.0114, 0.0198, 0.0269, 0.0327, 0.0136, 0.0155,
     0.0178, 0.0099, 0.0183, 0.0153, 0.0779, 0.0499, 0.0169, 0.0356, 0.0121, 0.0405,
     0.0219, 0.0089, 0.0116, 0.0224, 0.0046, 0.0118, 0.0097, 0.0193, 0.0068, 0.0047,
     0.0095, 0.051, 0.0227), V51 = c(0.0169, 0.0172, 0.0045, 0.0107, 0.0036, 0.0209,
     0.0248, 0.0136, 0.0214, 0.0104, 0.0118, 0.0072, 0.0171, 0.0188, 0.0027, 0.0299,
     0.0118, 0.0066, 0.0181, 0.0082, 0.03, 0.0052, 0.0046, 0.0104, 0.0081, 0.0396,
     0.0206, 0.0135, 0.0367, 0.0153, 0.0113, 0.0086, 0.0249, 0.0092, 0.0133, 0.0156,
     9e-04, 0.0223, 0.0156, 0.0187, 0.0019, 0.0047, 0.0054, 0.0128), V52 = c(0.0081,
     0.018, 0.0084, 0.0091, 0.0022, 0.0134, 0.0274, 0.0044, 0.0221, 0.0095, 0.0146,
     0.0108, 0.0118, 0.0127, 0.0208, 0.0244, 0.009, 8e-04, 0.0217, 0.014, 0.0112,
     0.0081, 0.0094, 0.0117, 0.0191, 0.0173, 0.0073, 0.0222, 0.0176, 0.0092, 0.0028,
     0.0123, 0.0204, 0.0078, 0.0078, 0.0031, 0.0142, 0.0121, 0.0362, 0.0059, 0.0041,
     0.0072, 0.0078, 0.0173), V53 = c(0.004, 0.011, 0.001, 0.0016, 0.0032, 0.0094,
     0.0205, 0.0028, 0.0152, 0.0151, 0.004, 0.007, 0.0129, 0.0081, 0.0048, 0.0199,
     0.0223, 0.0045, 0.0038, 0.0044, 0.0112, 0.012, 0.0048, 0.0101, 0.0182, 0.0149,
     0.0081, 0.0175, 0.0035, 0.0035, 0.0036, 0.006, 0.0059, 0.0041, 0.0174, 0.0054,
     0.0179, 0.0108, 0.021, 0.0095, 0.0074, 0.0054, 0.0201, 0.0135), V54 = c(0.0025,
     0.0234, 0.0018, 0.0084, 0.006, 0.0047, 0.0141, 0.0021, 0.0083, 0.0059, 0.0114,
     0.0063, 0.0344, 0.0067, 0.0199, 0.0257, 0.0179, 0.0024, 0.0019, 0.0052, 0.0102,
     0.0045, 0.0047, 0.0061, 0.016, 0.0115, 0.0303, 0.0127, 0.0093, 0.0098, 0.0105,
     0.0187, 0.0053, 0.0013, 0.0176, 0.0105, 0.0079, 0.0057, 0.0154, 0.0194, 0.003,
     0.0022, 0.0104, 0.0114), V55 = c(0.0036, 0.0276, 0.0068, 0.0064, 0.0054, 0.0045,
     0.0185, 0.0022, 0.0058, 0.0015, 0.0032, 0.003, 0.0065, 0.0043, 0.0126, 0.0082,
     0.0084, 6e-04, 0.0065, 0.0073, 0.0026, 0.0121, 0.0016, 0.0031, 0.029, 0.0202,
     0.019, 0.0022, 0.0121, 0.0121, 0.012, 0.0111, 0.0079, 0.0011, 0.0038, 0.011,
     0.006, 0.0028, 0.018, 0.008, 0.005, 0.0016, 0.0039, 0.0062), V56 = c(0.0058,
     0.0032, 0.0039, 0.0026, 0.0063, 0.0042, 0.0055, 0.0048, 0.0023, 0.0053, 0.0062,
     0.0011, 0.0067, 0.0065, 0.0022, 0.0151, 0.0068, 0.0073, 0.0132, 0.0021, 0.0097,
     0.0097, 8e-04, 0.0099, 0.009, 0.0139, 0.0212, 0.0124, 0.0075, 6e-04, 0.0087,
     0.0126, 0.0037, 0.0045, 0.0129, 0.0015, 0.0131, 0.0079, 0.0013, 0.0152, 0.0048,
     0.0029, 0.0031, 0.0157), V57 = c(0.0067, 0.0084, 0.012, 0.0029, 0.0143, 0.0028,
     0.0045, 0.0138, 0.0057, 0.0016, 0.0101, 7e-04, 0.0022, 0.0049, 0.0037, 0.0171,
     0.0032, 0.0096, 0.0108, 0.0047, 0.0098, 0.0085, 0.0042, 0.008, 0.0242, 0.0029,
     0.0126, 0.0054, 0.0056, 0.0181, 0.0061, 0.0081, 0.0015, 0.0039, 0.0066, 0.0072,
     0.0089, 0.0034, 0.0106, 0.0158, 0.0017, 0.0058, 0.0062, 0.0088), V58 = c(0.0035,
     0.0122, 0.0132, 0.0037, 0.0132, 0.0036, 0.0115, 0.014, 0.0052, 0.0042, 0.0068,
     0.0024, 0.0079, 0.0054, 0.0034, 0.0146, 0.0035, 0.0054, 0.005, 0.0024, 0.0043,
     0.0047, 0.0024, 0.0107, 0.0224, 0.016, 0.0201, 0.0021, 0.0021, 0.0094, 0.0061,
     0.0155, 0.0056, 0.0022, 0.0044, 0.0048, 0.0084, 0.0046, 0.0127, 0.0053, 0.0041,
     0.005, 0.0087, 0.0036), V59 = c(0.0043, 0.0082, 0.007, 0.007, 0.0051, 0.0013,
     0.0152, 0.0028, 0.0027, 0.0053, 0.0053, 0.0057, 0.0146, 0.0073, 0.0114, 0.0134,
     0.0056, 0.0085, 0.0085, 9e-04, 0.0071, 0.0048, 0.0027, 0.0161, 0.019, 0.0106,
     0.021, 0.0028, 0.0043, 0.0116, 0.003, 0.016, 0.0067, 0.0023, 0.0134, 0.0107,
     0.0113, 0.0022, 0.0178, 0.0189, 0.0086, 0.0024, 0.007, 0.0053), V60 = c(0.0033,
     0.0143, 0.0088, 0.0041, 0.0041, 0.0016, 0.01, 0.0064, 0.0021, 0.0074, 0.0087,
     0.0044, 0.0051, 0.0054, 0.0077, 0.0056, 0.004, 0.006, 0.0044, 0.0017, 0.0108,
     0.0053, 0.0041, 0.0133, 0.0096, 0.0134, 0.0041, 0.0023, 0.0017, 0.0063, 0.0078,
     0.0085, 0.0054, 0.0016, 0.0092, 0.0094, 0.0049, 0.0021, 0.0231, 0.0102, 0.0058,
     0.003, 0.0042, 0.003)), class = "data.frame", row.names = c("168", "135", "19",
     "24", "38", "194", "131", "55", "160", "174", "180", "26", "130", "76", "91", "112",
     "10", "48", "167", "70", "40", "8", "171", "111", "100", "23", "133", "120", "182",
     "20", "83", "172", "181", "61", "107", "5", "108", "190", "99", "14", "60", "63",
     "187", "122")), s = 0.0251183277992532)
     42: predictLearner(.learner, .model, .newdata, ...)
     43: predictLearner.classif.glmnet(.learner, .model, .newdata, ...)
     44: predict(.model$learner.model, newx = .newdata, type = "response", ...)
     45: predict.lognet(.model$learner.model, newx = .newdata, type = "response", ...)
     46: NextMethod("predict")
     47: predict.glmnet(.model$learner.model, newx = .newdata, type = "response", ...)
     48: lambda.interp(lambda, s)
     49: approx(lambda, seq(lambda), sfrac)
     50: stop("need at least two non-NA values to interpolate")
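    
     The root cause is visible in frames 48-50: glmnet's lambda.interp() hands the fitted
     lambda sequence to stats::approx(), which refuses to interpolate through fewer than two
     points. So when the fitted lambda path contains only a single value (for instance because
     a single lambda was supplied at fit time, something the glmnet documentation quoted
     further below explicitly warns against, or because the path terminated after one step),
     predicting at an s value off the path fails with exactly this message. A minimal sketch
     that reproduces it, assuming only a length-one lambda path; this is an illustration, not
     part of the check output:
    
     # Mimic frame 49, approx(lambda, seq(lambda), sfrac), with a single
     # fitted lambda value; stats::approx() needs >= 2 non-NA points.
     lambda <- 0.0251                      # length-one lambda path
     approx(lambda, seq_along(lambda), xout = 0.5)
     # Error in approx(lambda, seq_along(lambda), xout = 0.5) :
     #   need at least two non-NA values to interpolate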
    
     ── 5. Error: TuneWrapper with glmnet (#958) (@test_base_TuneWrapper.R#118) ────
     need at least two non-NA values to interpolate
     1: train(lrn2, multiclass.task) at testthat/test_base_TuneWrapper.R:118
     2: measureTime(fun1({
     learner.model = fun2(fun3(do.call(trainLearner, pars)))
     }))
     3: force(expr)
     4: fun1({
     learner.model = fun2(fun3(do.call(trainLearner, pars)))
     })
     5: fun2(fun3(do.call(trainLearner, pars)))
     6: fun3(do.call(trainLearner, pars))
     7: do.call(trainLearner, pars)
     8: (function (.learner, .task, .subset, .weights = NULL, ...)
     {
     UseMethod("trainLearner")
     })(.learner = structure(list(id = "classif.glmnet.tuned", type = "classif", package = "glmnet",
     properties = NULL, par.set = structure(list(pars = structure(list(), .Names = character(0)),
     forbidden = NULL), class = "ParamSet"), par.vals = structure(list(), .Names = character(0)),
     predict.type = "response", fix.factors.prediction = FALSE, next.learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "predict"), class = c("LearnerParam",
     "Param")), nlambda = structure(list(id = "nlambda", type = "integer",
     len = 1L, lower = 1L, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lambda = structure(list(
     id = "lambda", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), standardize = structure(list(
     id = "standardize", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical",
     len = 1L, lower = NULL, upper = NULL, values = list(`TRUE` = TRUE,
     `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), thresh = structure(list(
     id = "thresh", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100000L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.logistic = structure(list(id = "type.logistic", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(Newton = "Newton",
     modified.Newton = "modified.Newton"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial",
     type = "discrete", len = 1L, lower = NULL, upper = NULL, values = list(
     ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 0.999, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), eps = structure(list(id = "eps", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-06, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 9.9e+35, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mnlam = structure(list(id = "mnlam", type = "integer", len = 1L,
     lower = 1, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 5, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exmx = structure(list(id = "exmx", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 250, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-10, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mxit = structure(list(id = "mxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet"
     )), par.vals = list(s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(
     s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), model.subclass = "TuneModel", resampling = structure(list(
     split = 0.666666666666667, id = "holdout", iters = 1L, predict = "test",
     stratify = FALSE), class = c("HoldoutDesc", "ResampleDesc")), measures = list(
     structure(list(id = "mmce", minimize = TRUE, properties = c("classif", "classif.multi",
     "req.pred", "req.truth"), fun = function (task, model, pred, feats, extra.args)
     {
     measureMMCE(pred$data$truth, pred$data$response)
     }, extra.args = list(), best = 0, worst = 1, name = "Mean misclassification error",
     note = "Defined as: mean(response != truth)", aggr = structure(list(id = "test.mean",
     name = "Test mean", fun = function (task, perf.test, perf.train,
     measure, group, pred)
     mean(perf.test), properties = "req.test"), class = "Aggregation")), class = "Measure")),
     opt.pars = structure(list(pars = list(alpha = structure(list(id = "alpha", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     ))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), bit.names = character(0),
     bits.to.features = function ()
     {
     }, control = structure(list(same.resampling.instance = TRUE, impute.val = NULL,
     tune.threshold = FALSE, tune.threshold.args = list(), log.fun = function (learner,
     task, resampling, measures, par.set, control, opt.path, dob, x, y, remove.nas,
     stage, prev.stage)
     {
     x.string = paramValueToString(par.set, x, show.missing.values = !remove.nas)
     if (inherits(learner, "ModelMultiplexer"))
     x.string = stri_replace_all(x.string, "", regex = stri_paste(x$selected.learner,
     "\\."))
     logFunDefault(learner, task, resampling, measures, par.set, control,
     opt.path, dob, x.string, y, remove.nas, stage, prev.stage, prefixes = c("Tune-x",
     "Tune-y"))
     }, final.dw.perc = NULL, extra.args = list(maxit = 100L), budget = 100L), class = c("TuneControlRandom",
     "TuneControl", "OptControl")), show.info = FALSE), class = c("TuneWrapper", "OptWrapper",
     "BaseWrapper", "Learner")), .task = structure(list(type = "classif", env = <environment>,
     weights = NULL, blocking = NULL, coordinates = NULL, task.desc = structure(list(
     id = "multiclass", type = "classif", target = "Species", size = 150L, n.feat = c(numerics = 4L,
     factors = 0L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("setosa",
     "versicolor", "virginica"), positive = NA_character_, negative = NA_character_,
     class.distribution = structure(c(setosa = 50L, versicolor = 50L, virginica = 50L
     ), .Dim = 3L, .Dimnames = structure(list(c("setosa", "versicolor", "virginica"
     )), .Names = ""), class = "table")), class = c("ClassifTaskDesc", "SupervisedTaskDesc",
     "TaskDesc"))), class = c("ClassifTask", "SupervisedTask", "Task")), .subset = NULL)
     9: trainLearner.TuneWrapper(.learner = structure(list(id = "classif.glmnet.tuned", type = "classif",
     package = "glmnet", properties = NULL, par.set = structure(list(pars = structure(list(), .Names = character(0)),
     forbidden = NULL), class = "ParamSet"), par.vals = structure(list(), .Names = character(0)),
     predict.type = "response", fix.factors.prediction = FALSE, next.learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "predict"), class = c("LearnerParam",
     "Param")), nlambda = structure(list(id = "nlambda", type = "integer",
     len = 1L, lower = 1L, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lambda = structure(list(
     id = "lambda", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), standardize = structure(list(
     id = "standardize", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical",
     len = 1L, lower = NULL, upper = NULL, values = list(`TRUE` = TRUE,
     `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), thresh = structure(list(
     id = "thresh", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100000L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.logistic = structure(list(id = "type.logistic", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(Newton = "Newton",
     modified.Newton = "modified.Newton"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial",
     type = "discrete", len = 1L, lower = NULL, upper = NULL, values = list(
     ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 0.999, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), eps = structure(list(id = "eps", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-06, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 9.9e+35, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mnlam = structure(list(id = "mnlam", type = "integer", len = 1L,
     lower = 1, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 5, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exmx = structure(list(id = "exmx", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 250, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L,
     lower = -Inf, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-10, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), mxit = structure(list(id = "mxit", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet"
     )), par.vals = list(s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(
     s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), model.subclass = "TuneModel", resampling = structure(list(
     split = 0.666666666666667, id = "holdout", iters = 1L, predict = "test",
     stratify = FALSE), class = c("HoldoutDesc", "ResampleDesc")), measures = list(
     structure(list(id = "mmce", minimize = TRUE, properties = c("classif", "classif.multi",
     "req.pred", "req.truth"), fun = function (task, model, pred, feats, extra.args)
     {
     measureMMCE(pred$data$truth, pred$data$response)
     }, extra.args = list(), best = 0, worst = 1, name = "Mean misclassification error",
     note = "Defined as: mean(response != truth)", aggr = structure(list(id = "test.mean",
     name = "Test mean", fun = function (task, perf.test, perf.train,
     measure, group, pred)
     mean(perf.test), properties = "req.test"), class = "Aggregation")), class = "Measure")),
     opt.pars = structure(list(pars = list(alpha = structure(list(id = "alpha", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     ))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), bit.names = character(0),
     bits.to.features = function ()
     {
     }, control = structure(list(same.resampling.instance = TRUE, impute.val = NULL,
     tune.threshold = FALSE, tune.threshold.args = list(), log.fun = function (learner,
     task, resampling, measures, par.set, control, opt.path, dob, x, y, remove.nas,
     stage, prev.stage)
     {
     x.string = paramValueToString(par.set, x, show.missing.values = !remove.nas)
     if (inherits(learner, "ModelMultiplexer"))
     x.string = stri_replace_all(x.string, "", regex = stri_paste(x$selected.learner,
     "\\."))
     logFunDefault(learner, task, resampling, measures, par.set, control,
     opt.path, dob, x.string, y, remove.nas, stage, prev.stage, prefixes = c("Tune-x",
     "Tune-y"))
     }, final.dw.perc = NULL, extra.args = list(maxit = 100L), budget = 100L), class = c("TuneControlRandom",
     "TuneControl", "OptControl")), show.info = FALSE), class = c("TuneWrapper", "OptWrapper",
     "BaseWrapper", "Learner")), .task = structure(list(type = "classif", env = <environment>,
     weights = NULL, blocking = NULL, coordinates = NULL, task.desc = structure(list(
     id = "multiclass", type = "classif", target = "Species", size = 150L, n.feat = c(numerics = 4L,
     factors = 0L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("setosa",
     "versicolor", "virginica"), positive = NA_character_, negative = NA_character_,
     class.distribution = structure(c(setosa = 50L, versicolor = 50L, virginica = 50L
     ), .Dim = 3L, .Dimnames = structure(list(c("setosa", "versicolor", "virginica"
     )), .Names = ""), class = "table")), class = c("ClassifTaskDesc", "SupervisedTaskDesc",
     "TaskDesc"))), class = c("ClassifTask", "SupervisedTask", "Task")), .subset = NULL)
     10: tuneParams(.learner$next.learner, .task, .learner$resampling, .learner$measures,
     .learner$opt.pars, .learner$control, .learner$show.info)
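Frame 10, above, is where the wrapper hands off to the tuner with its stored components. The same hand-off can be made directly; a sketch under the assumptions of the previous block:

    # Direct equivalent of the tuneParams() call in frame 10
    res = tuneParams(lrn, task = task, resampling = rdesc,
      measures = list(mmce), par.set = ps, control = ctrl, show.info = FALSE)
    res$x   # best alpha found by the random search
    res$y   # mmce achieved with that alpha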
     ...
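The learner note quoted in the dump warns that glmnet keeps its solver settings in a global control object and that mlr resets them around training, so user-set glmnet.control() values must be saved and restored by hand. A sketch of that save/restore pattern (assuming the glmnet package is attached):

    library(glmnet)
    old = glmnet.control()            # current settings, as a named list
    glmnet.control(fdev = 0)          # e.g. disable the fractional-deviance stop
    # ... run glmnet-based learners here ...
    do.call(glmnet.control, old)      # restore the saved settings
    # glmnet.control(factory = TRUE)  # or fall back to factory defaults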
     34: fun3(do.call(predictLearner2, pars))
     35: do.call(predictLearner2, pars)
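Frame 36, below, is mlr's internal predict wrapper. Its fix.factors.prediction branch re-levels the factor columns of the new data to the factor levels stored in the fitted model before calling predictLearner. A standalone sketch of just that re-levelling step (hypothetical function name; the logic is copied from the frame that follows):

    # fls: named list of training-time factor levels; newdata: data to predict on
    relevelFactors = function(newdata, fls) {
      ns = intersect(colnames(newdata), names(fls))
      if (length(ns) > 0L)
        newdata[ns] = mapply(factor, x = newdata[ns], levels = fls[ns],
          SIMPLIFY = FALSE)
      newdata
    }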
     36: (function (.learner, .model, .newdata, ...)
     {
     if (.learner$fix.factors.prediction) {
     fls = .model$factor.levels
     ns = names(fls)
     ns = intersect(colnames(.newdata), ns)
     fls = fls[ns]
     if (length(ns) > 0L)
     .newdata[ns] = mapply(factor, x = .newdata[ns], levels = fls, SIMPLIFY = FALSE)
     }
     p = predictLearner(.learner, .model, .newdata, ...)
     p = checkPredictLearnerOutput(.learner, .model, p)
     return(p)
     })(.learner = structure(list(id = "classif.glmnet", type = "classif", package = "glmnet",
     properties = c("numerics", "factors", "prob", "twoclass", "multiclass", "weights"
     ), par.set = structure(list(pars = list(alpha = structure(list(id = "alpha",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01, alpha = 0.257216746453196), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .model = structure(list(learner = structure(list(
     id = "classif.glmnet", type = "classif", package = "glmnet", properties = c("numerics",
     "factors", "prob", "twoclass", "multiclass", "weights"), par.set = structure(list(
     pars = list(alpha = structure(list(id = "alpha", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0, upper = Inf,
     values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "predict"), class = c("LearnerParam", "Param"
     )), nlambda = structure(list(id = "nlambda", type = "integer", len = 1L,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 100L, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), intercept = structure(list(
     id = "intercept", type = "logical", len = 1L, lower = NULL, upper = NULL,
     values = list(`TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = TRUE, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), thresh = structure(list(id = "thresh", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), dfmax = structure(list(
     id = "dfmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exclude = structure(list(
     id = "exclude", type = "integervector", len = NA_integer_, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), penalty.factor = structure(list(
     id = "penalty.factor", type = "numericvector", len = NA_integer_, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), lower.limits = structure(list(
     id = "lower.limits", type = "numericvector", len = NA_integer_, lower = -Inf,
     upper = 0, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), upper.limits = structure(list(
     id = "upper.limits", type = "numericvector", len = NA_integer_, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), maxit = structure(list(
     id = "maxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100000L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"),
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.multinomial = structure(list(
     id = "type.multinomial", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(ungrouped = "ungrouped", grouped = "grouped"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), fdev = structure(list(id = "fdev", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-05, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), devmax = structure(list(id = "devmax", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), big = structure(list(
     id = "big", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 9.9e+35,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), exmx = structure(list(id = "exmx", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 250, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), prec = structure(list(
     id = "prec", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-10,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L,
     trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param"))), forbidden = NULL), class = c("LearnerParamSet",
     "ParamSet")), par.vals = list(s = 0.01, alpha = 0.257216746453196), predict.type = "response",
     name = "GLM with Lasso or Elasticnet Regularization", short.name = "glmnet",
     note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), learner.model = structure(list(a0 = structure(c(0.0788477036535311,
     -0.0706840303174326, -0.00816367333609857, 0.0788477036535312, -0.0706840303174325,
     -0.00816367333609862, 0.0788477036535312, -0.0706840303174327, -0.00816367333609857,
     0.0788477036535312, -0.0706840303174325, -0.00816367333609866, 0.0788477036535313,
     -0.0706840303174327, -0.00816367333609861), .Dim = c(3L, 5L), .Dimnames = list(c("setosa",
     "versicolor", "virginica"), c("s0", "s1", "s2", "s3", "s4"))), beta = list(setosa = new("dgCMatrix",
     i = c(0L, 0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4"
     )), x = c(0, 0, 0, 0, 0), factors = list()), versicolor = new("dgCMatrix", i = c(0L,
     0L, 0L, 0L, 0L), p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width",
     "Petal.Length", "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0,
     0, 0), factors = list()), virginica = new("dgCMatrix", i = c(0L, 0L, 0L, 0L, 0L),
     p = 0:5, Dim = 4:5, Dimnames = list(c("Sepal.Length", "Sepal.Width", "Petal.Length",
     "Petal.Width"), c("s0", "s1", "s2", "s3", "s4")), x = c(0, 0, 0, 0, 0), factors = list())),
     dfmat = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(3L,
     5L), .Dimnames = list(c("setosa", "versicolor", "virginica"), c("s0", "s1", "s2",
     "s3", "s4"))), df = function (x, df1, df2, ncp, log = FALSE)
     {
     if (missing(ncp))
     .Call(C_df, x, df1, df2, log)
     else .Call(C_dnf, x, df1, df2, ncp, log)
     }, dim = 4:5, lambda = c(NaN, NaN, NaN, NaN, NaN), dev.ratio = c(9.63871610715682e-16,
     9.63871610715682e-16, 9.65453348743523e-16, 9.63871610715682e-16, 9.63871610715682e-16
     ), nulldev = 219.343967893912, npasses = 18L, jerr = 0L, offset = FALSE, classnames = c("setosa",
     "versicolor", "virginica"), grouped = FALSE, call = (function (x, y, family = c("gaussian",
     "binomial", "poisson", "multinomial", "cox", "mgaussian"), weights, offset = NULL,
     alpha = 1, nlambda = 100, lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04),
     lambda = NULL, standardize = TRUE, intercept = TRUE, thresh = 1e-07, dfmax = nvars +
     1, pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1,
     nvars), lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05, type.gaussian = ifelse(nvars <
     500, "covariance", "naive"), type.logistic = c("Newton", "modified.Newton"),
     standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
     {
     family = match.arg(family)
     if (alpha > 1) {
     warning("alpha >1; set to 1")
     alpha = 1
     }
     if (alpha < 0) {
     warning("alpha<0; set to 0")
     alpha = 0
     }
     alpha = as.double(alpha)
     this.call = match.call()
     nlam = as.integer(nlambda)
     y = drop(y)
     np = dim(x)
     if (is.null(np) | (np[2] <= 1))
     stop("x should be a matrix with 2 or more columns")
     nobs = as.integer(np[1])
     if (missing(weights))
     weights = rep(1, nobs)
     else if (length(weights) != nobs)
     stop(paste("number of elements in weights (", length(weights), ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     nvars = as.integer(np[2])
     dimy = dim(y)
     nrowy = ifelse(is.null(dimy), length(y), dimy[1])
     if (nrowy != nobs)
     stop(paste("number of observations in y (", nrowy, ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     vnames = colnames(x)
     if (is.null(vnames))
     vnames = paste("V", seq(nvars), sep = "")
     ne = as.integer(dfmax)
     nx = as.integer(pmax)
     if (missing(exclude))
     exclude = integer(0)
     if (any(penalty.factor == Inf)) {
     exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
     exclude = sort(unique(exclude))
     }
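     # Variables with an infinite penalty factor are folded into exclude here,
     # which is why the help text above calls exclude "equivalent to an
     # infinite penalty factor".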
     if (length(exclude) > 0) {
     jd = match(exclude, seq(nvars), 0)
     if (!all(jd > 0))
     stop("Some excluded variables out of range")
     penalty.factor[jd] = 1
     jd = as.integer(c(length(jd), jd))
     }
     else jd = as.integer(0)
     vp = as.double(penalty.factor)
     internal.parms = glmnet.control()
     if (any(lower.limits > 0)) {
     stop("Lower limits should be non-positive")
     }
     if (any(upper.limits < 0)) {
     stop("Upper limits should be non-negative")
     }
     lower.limits[lower.limits == -Inf] = -internal.parms$big
     upper.limits[upper.limits == Inf] = internal.parms$big
     if (length(lower.limits) < nvars) {
     if (length(lower.limits) == 1)
     lower.limits = rep(lower.limits, nvars)
     else stop("Require length 1 or nvars lower.limits")
     }
     else lower.limits = lower.limits[seq(nvars)]
     if (length(upper.limits) < nvars) {
     if (length(upper.limits) == 1)
     upper.limits = rep(upper.limits, nvars)
     else stop("Require length 1 or nvars upper.limits")
     }
     else upper.limits = upper.limits[seq(nvars)]
     cl = rbind(lower.limits, upper.limits)
     if (any(cl == 0)) {
     fdev = glmnet.control()$fdev
     if (fdev != 0) {
     glmnet.control(fdev = 0)
     on.exit(glmnet.control(fdev = fdev))
     }
     }
     storage.mode(cl) = "double"
     isd = as.integer(standardize)
     intr = as.integer(intercept)
     if (!missing(intercept) && family == "cox")
     warning("Cox model has no intercept")
     jsd = as.integer(standardize.response)
     thresh = as.double(thresh)
     if (is.null(lambda)) {
     if (lambda.min.ratio >= 1)
     stop("lambda.min.ratio should be less than 1")
     flmin = as.double(lambda.min.ratio)
     ulam = double(1)
     }
     else {
     flmin = as.double(1)
     if (any(lambda < 0))
     stop("lambdas should be non-negative")
     ulam = as.double(rev(sort(lambda)))
     nlam = as.integer(length(lambda))
     }
     is.sparse = FALSE
     ix = jx = NULL
     if (inherits(x, "sparseMatrix")) {
     is.sparse = TRUE
     x = as(x, "CsparseMatrix")
     x = as(x, "dgCMatrix")
     ix = as.integer(x@p + 1)
     jx = as.integer(x@i + 1)
     x = as.double(x@x)
     }
     kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
     if (family == "multinomial") {
     type.multinomial = match.arg(type.multinomial)
     if (type.multinomial == "grouped")
     kopt = 2
     }
     kopt = as.integer(kopt)
     fit = switch(family, gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset,
     type.gaussian, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam,
     thresh, isd, intr, vnames, maxit), poisson = fishnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, intr, vnames, maxit), binomial = lognet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl,
     ne, nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family),
     multinomial = lognet(x, is.sparse, ix, jx, y, weights, offset, alpha,
     nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam, thresh, isd,
     intr, vnames, maxit, kopt, family), cox = coxnet(x, is.sparse, ix,
     jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, vnames, maxit), mgaussian = mrelnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp,
     cl, ne, nx, nlam, flmin, ulam, thresh, isd, jsd, intr, vnames, maxit))
     if (is.null(lambda))
     fit$lambda = fix.lam(fit$lambda)
     fit$call = this.call
     fit$nobs = nobs
     class(fit) = c(class(fit), "glmnet")
     fit
     })(x = structure(c(5, 7.7, 5, 7.2, 6.4, 4.6, 6.8, 6.1, 6, 5.6, 6.3, 6.1, 5, 6.9,
     4.3, 5.6, 5.2, 5.4, 5.9, 7.2, 6.4, 5.5, 5.5, 6.2, 5.8, 5.6, 5.8, 6.1, 5, 5.1,
     6.3, 7.3, 5.8, 6.7, 4.7, 6.5, 6.7, 4.8, 6.1, 5, 5.7, 4.8, 5.1, 5.1, 5.4, 5.8,
     6.8, 4.6, 5.2, 5.7, 5, 7.9, 6.3, 4.8, 5.5, 5.1, 5.7, 5.9, 7.4, 4.9, 5.2, 4.4,
     5.5, 5.1, 5.8, 4.4, 6.2, 6.3, 6.7, 6.4, 6.5, 5, 5.7, 5.1, 5.5, 7.6, 5.2, 7.7,
     6.5, 5, 6.9, 5.1, 7.7, 6.9, 6.7, 6.1, 6, 6.3, 5.8, 5.4, 6, 5.4, 5.6, 4.9, 5.6,
     5.7, 4.4, 5.7, 6.3, 5, 3.5, 3.8, 2, 3, 3.1, 3.4, 2.8, 3, 2.9, 2.9, 2.8, 2.9,
     2.3, 3.1, 3, 2.8, 4.1, 3.9, 3, 3.2, 3.2, 2.5, 2.4, 2.8, 2.7, 3, 2.7, 2.8, 3.2,
     3.5, 2.5, 2.9, 2.7, 3, 3.2, 2.8, 3.1, 3.4, 2.6, 3, 4.4, 3, 3.8, 3.4, 3.9, 4,
     3, 3.2, 3.5, 2.5, 3.6, 3.8, 3.4, 3.4, 2.3, 3.8, 2.9, 3.2, 2.8, 3.1, 2.7, 2.9,
     4.2, 3.3, 2.6, 3, 2.2, 2.7, 3.1, 2.7, 3, 3.3, 2.8, 3.5, 3.5, 3, 3.4, 2.6, 3.2,
     3.4, 3.2, 3.8, 2.8, 3.1, 3.3, 3, 3, 3.3, 2.8, 3.7, 3.4, 3, 2.7, 3.6, 2.5, 2.6,
     3.2, 2.8, 2.9, 3.4, 1.6, 6.7, 3.5, 5.8, 5.5, 1.4, 4.8, 4.9, 4.5, 3.6, 5.1, 4.7,
     3.3, 5.1, 1.1, 4.9, 1.5, 1.7, 5.1, 6, 5.3, 4, 3.7, 4.8, 3.9, 4.1, 4.1, 4.7, 1.2,
     1.4, 5, 6.3, 5.1, 5.2, 1.3, 4.6, 4.7, 1.9, 5.6, 1.6, 1.5, 1.4, 1.9, 1.5, 1.3,
     1.2, 5.5, 1.4, 1.5, 5, 1.4, 6.4, 5.6, 1.6, 4, 1.5, 4.2, 4.8, 6.1, 1.5, 3.9, 1.4,
     1.4, 1.7, 4, 1.3, 4.5, 4.9, 4.4, 5.3, 5.2, 1.4, 4.5, 1.4, 1.3, 6.6, 1.4, 6.9,
     5.1, 1.5, 5.7, 1.6, 6.7, 4.9, 5.7, 4.6, 4.8, 4.7, 5.1, 1.5, 4.5, 4.5, 4.2, 1.4,
     3.9, 3.5, 1.3, 4.1, 5.6, 1.6, 0.6, 2.2, 1, 1.6, 1.8, 0.3, 1.4, 1.8, 1.5, 1.3,
     1.5, 1.4, 1, 2.3, 0.1, 2, 0.1, 0.4, 1.8, 1.8, 2.3, 1.3, 1, 1.8, 1.2, 1.3, 1,
     1.2, 0.2, 0.3, 1.9, 1.8, 1.9, 2.3, 0.2, 1.5, 1.5, 0.2, 1.4, 0.2, 0.4, 0.3, 0.4,
     0.2, 0.4, 0.2, 2.1, 0.2, 0.2, 2, 0.2, 2, 2.4, 0.2, 1.3, 0.3, 1.3, 1.8, 1.9, 0.2,
     1.4, 0.2, 0.2, 0.5, 1.2, 0.2, 1.5, 1.8, 1.4, 1.9, 2, 0.2, 1.3, 0.2, 0.2, 2.1,
     0.2, 2.3, 2, 0.2, 2.3, 0.2, 2, 1.5, 2.5, 1.4, 1.8, 1.6, 2.4, 0.2, 1.6, 1.5, 1.3,
     0.1, 1.1, 1, 0.2, 1.3, 1.8, 0.4), .Dim = c(100L, 4L), .Dimnames = list(c("44",
     "118", "61", "130", "138", "7", "77", "128", "79", "65", "134", "64", "94", "142",
     "14", "122", "33", "6", "150", "126", "116", "90", "82", "127", "83", "89", "68",
     "74", "36", "18", "147", "108", "143", "146", "3", "55", "87", "25", "135", "26",
     "16", "46", "45", "40", "17", "15", "113", "48", "28", "114", "5", "132", "137",
     "12", "54", "20", "97", "71", "131", "35", "60", "9", "34", "24", "93", "39",
     "69", "124", "66", "112", "148", "50", "56", "1", "37", "106", "29", "119", "111",
     "8", "121", "47", "123", "53", "145", "92", "139", "57", "115", "11", "86", "85",
     "95", "38", "70", "80", "43", "100", "104", "27"), c("Sepal.Length", "Sepal.Width",
     "Petal.Length", "Petal.Width"))), y = structure(c(1L, 3L, 2L, 3L, 3L, 1L, 2L,
     3L, 2L, 2L, 3L, 2L, 2L, 3L, 1L, 3L, 1L, 1L, 3L, 3L, 3L, 2L, 2L, 3L, 2L, 2L, 2L,
     2L, 1L, 1L, 3L, 3L, 3L, 3L, 1L, 2L, 2L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L,
     1L, 1L, 3L, 1L, 3L, 3L, 1L, 2L, 1L, 2L, 2L, 3L, 1L, 2L, 1L, 1L, 1L, 2L, 1L, 2L,
     3L, 2L, 3L, 3L, 1L, 2L, 1L, 1L, 3L, 1L, 3L, 3L, 1L, 3L, 1L, 3L, 2L, 3L, 2L, 3L,
     2L, 3L, 1L, 2L, 2L, 2L, 1L, 2L, 2L, 1L, 2L, 3L, 1L), .Label = c("setosa", "versicolor",
     "virginica"), class = "factor"), family = "multinomial", alpha = 0.257216746453196),
     nobs = 100L), class = c("multnet", "glmnet"), mlr.train.info = structure(list(
     factors = structure(list(), .Names = character(0)), ordered = structure(list(), .Names = character(0)),
     restore.levels = FALSE, factors.to.dummies = FALSE, ordered.to.int = FALSE), class = "FixDataInfo")),
     task.desc = structure(list(id = "multiclass", type = "classif", target = "Species",
     size = 100L, n.feat = c(numerics = 4L, factors = 0L, ordered = 0L, functionals = 0L
     ), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE, has.coordinates = FALSE,
     class.levels = c("setosa", "versicolor", "virginica"), positive = NA_character_,
     negative = NA_character_, class.distribution = structure(c(setosa = 36L,
     versicolor = 31L, virginica = 33L), .Dim = 3L, .Dimnames = structure(list(
     c("setosa", "versicolor", "virginica")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc")), subset = c(44L, 118L, 61L, 130L, 138L, 7L,
     77L, 128L, 79L, 65L, 134L, 64L, 94L, 142L, 14L, 122L, 33L, 6L, 150L, 126L, 116L,
     90L, 82L, 127L, 83L, 89L, 68L, 74L, 36L, 18L, 147L, 108L, 143L, 146L, 3L, 55L,
     87L, 25L, 135L, 26L, 16L, 46L, 45L, 40L, 17L, 15L, 113L, 48L, 28L, 114L, 5L,
     132L, 137L, 12L, 54L, 20L, 97L, 71L, 131L, 35L, 60L, 9L, 34L, 24L, 93L, 39L,
     69L, 124L, 66L, 112L, 148L, 50L, 56L, 1L, 37L, 106L, 29L, 119L, 111L, 8L, 121L,
     47L, 123L, 53L, 145L, 92L, 139L, 57L, 115L, 11L, 86L, 85L, 95L, 38L, 70L, 80L,
     43L, 100L, 104L, 27L), features = c("Sepal.Length", "Sepal.Width", "Petal.Length",
     "Petal.Width"), factor.levels = list(Species = c("setosa", "versicolor", "virginica"
     )), time = 0.0110000000000099, dump = NULL), class = "WrappedModel"), .newdata = structure(list(
     Sepal.Length = c(6.3, 6.7, 6.3, 6.6, 5.4, 4.5, 5.1, 6.7, 6, 6.4, 4.9, 6.6, 7.2,
     5.3, 4.9, 6.7, 6.5, 6.2, 5, 5.4, 4.9, 4.8, 6, 5.7, 5.1, 7, 6.4, 6.2, 6.4, 5.5,
     6.1, 6.9, 5.8, 7.1, 4.7, 4.8, 6.4, 5.5, 4.9, 6, 7.7, 6.8, 6.3, 5.9, 6.7, 4.6,
     5.7, 5.6, 4.6, 6.5), Sepal.Width = c(2.5, 3.1, 2.3, 2.9, 3.4, 2.3, 2.5, 3, 2.7,
     2.9, 2.5, 3, 3.6, 3.7, 3, 3.3, 3, 3.4, 3.5, 3.4, 2.4, 3.1, 2.2, 3.8, 3.7, 3.2,
     2.8, 2.9, 2.8, 2.4, 2.8, 3.1, 2.7, 3, 3.2, 3, 3.2, 2.6, 3.1, 2.2, 3, 3.2, 3.3,
     3, 2.5, 3.6, 3, 3, 3.1, 3), Petal.Length = c(4.9, 5.6, 4.4, 4.6, 1.7, 1.3, 3,
     5, 5.1, 4.3, 4.5, 4.4, 6.1, 1.5, 1.4, 5.7, 5.8, 5.4, 1.3, 1.5, 3.3, 1.6, 4, 1.7,
     1.5, 4.7, 5.6, 4.3, 5.6, 3.8, 4, 5.4, 5.1, 5.9, 1.6, 1.4, 4.5, 4.4, 1.5, 5, 6.1,
     5.9, 6, 4.2, 5.8, 1, 4.2, 4.5, 1.5, 5.5), Petal.Width = c(1.5, 2.4, 1.3, 1.3,
     0.2, 0.3, 1.1, 1.7, 1.6, 1.3, 1.7, 1.4, 2.5, 0.2, 0.2, 2.1, 2.2, 2.3, 0.3, 0.4,
     1, 0.2, 1, 0.3, 0.4, 1.4, 2.2, 1.3, 2.1, 1.1, 1.3, 2.1, 1.9, 2.1, 0.2, 0.1, 1.5,
     1.2, 0.1, 1.5, 2.3, 2.3, 2.5, 1.5, 1.8, 0.2, 1.2, 1.5, 0.2, 1.8)), class = "data.frame", row.names = c(73L,
     141L, 88L, 59L, 21L, 42L, 99L, 78L, 84L, 75L, 107L, 76L, 110L, 49L, 2L, 125L, 105L,
     149L, 41L, 32L, 58L, 31L, 63L, 19L, 22L, 51L, 133L, 98L, 129L, 81L, 72L, 140L, 102L,
     103L, 30L, 13L, 52L, 91L, 10L, 120L, 136L, 144L, 101L, 62L, 109L, 23L, 96L, 67L,
     4L, 117L)), s = 0.01)
     37: predictLearner(.learner, .model, .newdata, ...)
     38: predictLearner.classif.glmnet(.learner, .model, .newdata, ...)
     39: predict(.model$learner.model, newx = .newdata, type = "class", ...)
     40: predict.multnet(.model$learner.model, newx = .newdata, type = "class", ...)
     41: lambda.interp(lambda, s)
     42: approx(lambda, seq(lambda), sfrac)
     43: stop("need at least two non-NA values to interpolate")
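
Note on this failure: the dumped multnet fit above shows lambda = c(NaN, NaN, NaN, NaN, NaN) together with dev.ratio values on the order of 1e-16, i.e. the fitted lambda path carries no usable values. When predict() is then called with s = 0.01 (frames 39/40), lambda.interp() hands that all-NaN path to approx() (frame 42), which stops because interpolation needs at least two non-NA knots. A minimal sketch of frames 41-43, with the lambda values copied from the dump and sfrac standing in for the fraction lambda.interp() would derive from s (this is illustrative, not the package's internal code):

    lambda <- c(NaN, NaN, NaN, NaN, NaN)  # lambda path from the dumped fit
    sfrac <- 0.5                          # stand-in for the interpolation point
    tryCatch(approx(lambda, seq(lambda), sfrac),
             error = conditionMessage)
    ## [1] "need at least two non-NA values to interpolate"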
    
     ── 6. Error: clustering performance (@test_base_clustering.R#15) ──────────────
     For learner cluster.SimpleKMeans please install the following packages: RWeka
     1: makeLearner("cluster.SimpleKMeans") at testthat/test_base_clustering.R:15
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C",
     default = FALSE), makeLogicalLearnerParam(id = "fast", default = FALSE),
     makeIntegerLearnerParam(id = "I", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init",
     default = 0L, lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M",
     default = FALSE), makeIntegerLearnerParam(id = "max-candidates", default = 100L,
     lower = 1L), makeIntegerLearnerParam(id = "min-density", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L, lower = 1L),
     makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L), makeLogicalLearnerParam(id = "O",
     default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning", default = 10000L,
     lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L, lower = 0L),
     makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     })()
     4: makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C", default = FALSE),
     makeLogicalLearnerParam(id = "fast", default = FALSE), makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init", default = 0L,
     lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M", default = FALSE),
     makeIntegerLearnerParam(id = "max-candidates", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "min-density",
     default = 2L, lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeLogicalLearnerParam(id = "O", default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning",
     default = 10000L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L,
     lower = 0L), makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     6: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
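
Errors 6 through 10 (and errors 14 and 15 further down) all reduce to the same cause: RWeka is a suggested package that is not installed on this check flavor, so makeLearner() aborts in requirePackages() (frames 7 and 8) before any clustering code runs. A guard sketch, wrapping the mlr call from the traceback in the standard requireNamespace() availability check:

    # Skip the learner when its suggested backend is absent instead of
    # erroring hard in requirePackages().
    if (requireNamespace("RWeka", quietly = TRUE)) {
      lrn <- mlr::makeLearner("cluster.SimpleKMeans")
    } else {
      message("RWeka unavailable; skipping cluster.SimpleKMeans")
    }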
    
     ── 7. Error: clustering performance with missing clusters (@test_base_clustering
     For learner cluster.SimpleKMeans please install the following packages: RWeka
     1: makeLearner("cluster.SimpleKMeans") at testthat/test_base_clustering.R:27
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C",
     default = FALSE), makeLogicalLearnerParam(id = "fast", default = FALSE),
     makeIntegerLearnerParam(id = "I", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init",
     default = 0L, lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M",
     default = FALSE), makeIntegerLearnerParam(id = "max-candidates", default = 100L,
     lower = 1L), makeIntegerLearnerParam(id = "min-density", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L, lower = 1L),
     makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L), makeLogicalLearnerParam(id = "O",
     default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning", default = 10000L,
     lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L, lower = 0L),
     makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     })()
     4: makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C", default = FALSE),
     makeLogicalLearnerParam(id = "fast", default = FALSE), makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init", default = 0L,
     lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M", default = FALSE),
     makeIntegerLearnerParam(id = "max-candidates", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "min-density",
     default = 2L, lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeLogicalLearnerParam(id = "O", default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning",
     default = 10000L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L,
     lower = 0L), makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     6: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 8. Error: clustering resample (@test_base_clustering.R#41) ─────────────────
     For learner cluster.SimpleKMeans please install the following packages: RWeka
     1: makeLearner("cluster.SimpleKMeans") at testthat/test_base_clustering.R:41
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C",
     default = FALSE), makeLogicalLearnerParam(id = "fast", default = FALSE),
     makeIntegerLearnerParam(id = "I", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init",
     default = 0L, lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M",
     default = FALSE), makeIntegerLearnerParam(id = "max-candidates", default = 100L,
     lower = 1L), makeIntegerLearnerParam(id = "min-density", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L, lower = 1L),
     makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L), makeLogicalLearnerParam(id = "O",
     default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning", default = 10000L,
     lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L, lower = 0L),
     makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     })()
     4: makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C", default = FALSE),
     makeLogicalLearnerParam(id = "fast", default = FALSE), makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init", default = 0L,
     lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M", default = FALSE),
     makeIntegerLearnerParam(id = "max-candidates", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "min-density",
     default = 2L, lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeLogicalLearnerParam(id = "O", default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning",
     default = 10000L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L,
     lower = 0L), makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     6: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 9. Error: clustering benchmark (@test_base_clustering.R#52) ────────────────
     For learner cluster.SimpleKMeans please install the following packages: RWeka
     1: lapply(learner.names, makeLearner) at testthat/test_base_clustering.R:52
     2: FUN(X[[i]], ...)
     3: do.call(constructor, list())
     4: (function ()
     {
     makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C",
     default = FALSE), makeLogicalLearnerParam(id = "fast", default = FALSE),
     makeIntegerLearnerParam(id = "I", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init",
     default = 0L, lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M",
     default = FALSE), makeIntegerLearnerParam(id = "max-candidates", default = 100L,
     lower = 1L), makeIntegerLearnerParam(id = "min-density", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L, lower = 1L),
     makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L), makeLogicalLearnerParam(id = "O",
     default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning", default = 10000L,
     lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L, lower = 0L),
     makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     })()
     5: makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C", default = FALSE),
     makeLogicalLearnerParam(id = "fast", default = FALSE), makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init", default = 0L,
     lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M", default = FALSE),
     makeIntegerLearnerParam(id = "max-candidates", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "min-density",
     default = 2L, lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeLogicalLearnerParam(id = "O", default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning",
     default = 10000L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L,
     lower = 0L), makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     6: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     7: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     8: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     9: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 10. Error: clustering tune (@test_base_clustering.R#65) ────────────────────
     For learner cluster.SimpleKMeans please install the following packages: RWeka
     1: makeLearner("cluster.SimpleKMeans") at testthat/test_base_clustering.R:65
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C",
     default = FALSE), makeLogicalLearnerParam(id = "fast", default = FALSE),
     makeIntegerLearnerParam(id = "I", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init",
     default = 0L, lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M",
     default = FALSE), makeIntegerLearnerParam(id = "max-candidates", default = 100L,
     lower = 1L), makeIntegerLearnerParam(id = "min-density", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L, lower = 1L),
     makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L), makeLogicalLearnerParam(id = "O",
     default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning", default = 10000L,
     lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L, lower = 0L),
     makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     })()
     4: makeRLearnerCluster(cl = "cluster.SimpleKMeans", package = "RWeka", par.set = makeParamSet(makeUntypedLearnerParam(id = "A",
     default = "weka.core.EuclideanDistance"), makeLogicalLearnerParam(id = "C", default = FALSE),
     makeLogicalLearnerParam(id = "fast", default = FALSE), makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeIntegerLearnerParam(id = "init", default = 0L,
     lower = 0L, upper = 3L), makeLogicalLearnerParam(id = "M", default = FALSE),
     makeIntegerLearnerParam(id = "max-candidates", default = 100L, lower = 1L), makeIntegerLearnerParam(id = "min-density",
     default = 2L, lower = 1L), makeIntegerLearnerParam(id = "N", default = 2L,
     lower = 1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeLogicalLearnerParam(id = "O", default = FALSE), makeIntegerLearnerParam(id = "periodic-pruning",
     default = 10000L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 10L,
     lower = 0L), makeNumericLearnerParam(id = "t2", default = -1), makeNumericLearnerParam(id = "t1",
     default = -1.5), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "K-Means Clustering", short.name = "simplekmeans",
     callees = c("SimpleKMeans", "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     6: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 11. Error: plotFilterValues (@test_base_generateFilterValuesData.R#68) ─────
     there is no package called 'FSelector'
     1: generateFilterValuesData(binaryclass.task, method = filter.classif) at testthat/test_base_generateFilterValuesData.R:68
     2: lapply(filter, function(x) {
     x = do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     missing.score = setdiff(fn, names(x))
     x[missing.score] = NA_real_
     x[match(fn, names(x))]
     })
     3: FUN(X[[i]], ...)
     4: do.call(x$fun, c(list(task = task, nselect = nselect), more.args[[x$name]]))
     5: (function (task, nselect, ...)
     {
     y = FSelector::chi.squared(getTaskFormula(task), data = getTaskData(task))
     setNames(y[["attr_importance"]], getTaskFeatureNames(task))
     })(task = structure(list(type = "classif", env = <environment>, weights = NULL, blocking = NULL,
     coordinates = NULL, task.desc = structure(list(id = "binary", type = "classif",
     target = "Class", size = 208L, n.feat = c(numerics = 60L, factors = 0L, ordered = 0L,
     functionals = 0L), has.missings = FALSE, has.weights = FALSE, has.blocking = FALSE,
     has.coordinates = FALSE, class.levels = c("M", "R"), positive = "M", negative = "R",
     class.distribution = structure(c(M = 111L, R = 97L), .Dim = 2L, .Dimnames = structure(list(
     c("M", "R")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), nselect = 60L)
     6: FSelector::chi.squared
     7: getExportedValue(pkg, name)
     8: asNamespace(ns)
     9: getNamespace(ns)
     10: tryCatch(loadNamespace(name), error = function(e) stop(e))
     11: tryCatchList(expr, classes, parentenv, handlers)
     12: tryCatchOne(expr, names, parentenv, handlers[[1L]])
     13: value[[3L]](cond)
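
Error 11 fails one step later than the learner errors above: the filter body in frame 5 calls FSelector::chi.squared() directly, so the `::` access (frame 6) triggers loadNamespace("FSelector") (frame 10), and that load is what throws "there is no package called 'FSelector'". The same availability guard applies; a sketch using mlr's bundled sonar.task, which matches the 208-row, 60-numeric "binary" task in the dump:

    # Only request an FSelector-backed filter when the package is present.
    if (requireNamespace("FSelector", quietly = TRUE)) {
      fv <- mlr::generateFilterValuesData(mlr::sonar.task,
                                          method = "chi.squared")
    }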
    
     ── 12. Error: getFeatureImportance (@test_base_getFeatureImportance.R#33) ─────
     For learner regr.gbm.filtered please install the following packages: FSelector
     1: train(lrn, regr.task) at testthat/test_base_getFeatureImportance.R:33
     2: requireLearnerPackages(learner)
     3: requirePackages(learner$package, why = stri_paste("learner", learner$id, sep = " "),
     default.method = "load")
     4: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 13. Error: getOOBPreds (@test_base_getOOBPreds.R#14) ───────────────────────
     For learner classif.randomForest.filtered please install the following packages: FSelector
     1: train(lrn, task) at testthat/test_base_getOOBPreds.R:14
     2: requireLearnerPackages(learner)
     3: requirePackages(learner$package, why = stri_paste("learner", learner$id, sep = " "),
     default.method = "load")
     4: stopf("For %s please install the following packages: %s", why, ps)
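
Errors 12 and 13 hit the missing FSelector dependency one level up: the learners under test are filter wrappers ("regr.gbm.filtered", "classif.randomForest.filtered"), and requireLearnerPackages() (frame 2) checks the filter's package together with the base learner's. A sketch of how the wrapper picks up that extra dependency; the fw.method shown is an assumed stand-in for whatever the tests configure:

    # A filter-wrapped learner inherits the filter's package requirement.
    lrn <- mlr::makeFilterWrapper(mlr::makeLearner("classif.randomForest"),
                                  fw.method = "chi.squared",  # backed by FSelector
                                  fw.perc = 0.5)              # keep top 50% of features
    lrn$id  # "classif.randomForest.filtered", as in the traceback above
    # mlr::train(lrn, task) now needs FSelector as well, and reproduces
    # error 13 on a machine where it is missing.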
    
     ── 14. Error: hyperpars (@test_base_hyperpars.R#12) ───────────────────────────
     For learner classif.J48 please install the following packages: RWeka
     1: makeLearner("classif.J48", C = 0.5) at testthat/test_base_hyperpars.R:12
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.J48", package = "RWeka", par.set = makeParamSet(makeLogicalLearnerParam(id = "U",
     default = FALSE), makeLogicalLearnerParam(id = "O", default = FALSE), makeNumericLearnerParam(id = "C",
     default = 0.25, lower = .Machine$double.eps, upper = 1 - .Machine$double.eps,
     requires = quote(!U && !R)), makeIntegerLearnerParam(id = "M", default = 2L,
     lower = 1L), makeLogicalLearnerParam(id = "R", default = FALSE, requires = quote(!U)),
     makeIntegerLearnerParam(id = "N", default = 3L, lower = 2L, requires = quote(!U &&
     R)), makeLogicalLearnerParam(id = "B", default = FALSE), makeLogicalLearnerParam(id = "S",
     default = FALSE, requires = quote(!U)), makeLogicalLearnerParam(id = "L",
     default = FALSE), makeLogicalLearnerParam(id = "A", default = FALSE),
     makeLogicalLearnerParam(id = "J", default = FALSE), makeIntegerLearnerParam(id = "Q",
     tunable = FALSE), makeLogicalLearnerParam(id = "output-debug-info", default = FALSE,
     tunable = FALSE)), properties = c("twoclass", "multiclass", "missings",
     "numerics", "factors", "prob"), name = "J48 Decision Trees", short.name = "j48",
     note = "NAs are directly passed to WEKA with `na.action = na.pass`.", callees = c("J48",
     "Weka_control"))
     })()
     4: makeRLearnerClassif(cl = "classif.J48", package = "RWeka", par.set = makeParamSet(makeLogicalLearnerParam(id = "U",
     default = FALSE), makeLogicalLearnerParam(id = "O", default = FALSE), makeNumericLearnerParam(id = "C",
     default = 0.25, lower = .Machine$double.eps, upper = 1 - .Machine$double.eps,
     requires = quote(!U && !R)), makeIntegerLearnerParam(id = "M", default = 2L,
     lower = 1L), makeLogicalLearnerParam(id = "R", default = FALSE, requires = quote(!U)),
     makeIntegerLearnerParam(id = "N", default = 3L, lower = 2L, requires = quote(!U &&
     R)), makeLogicalLearnerParam(id = "B", default = FALSE), makeLogicalLearnerParam(id = "S",
     default = FALSE, requires = quote(!U)), makeLogicalLearnerParam(id = "L",
     default = FALSE), makeLogicalLearnerParam(id = "A", default = FALSE), makeLogicalLearnerParam(id = "J",
     default = FALSE), makeIntegerLearnerParam(id = "Q", tunable = FALSE), makeLogicalLearnerParam(id = "output-debug-info",
     default = FALSE, tunable = FALSE)), properties = c("twoclass", "multiclass",
     "missings", "numerics", "factors", "prob"), name = "J48 Decision Trees", short.name = "j48",
     note = "NAs are directly passed to WEKA with `na.action = na.pass`.", callees = c("J48",
     "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 15. Error: check measure calculations (@test_base_measures.R#206) ──────────
     For learner cluster.EM please install the following packages: RWeka
     1: makeLearner("cluster.EM") at testthat/test_base_measures.R:206
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerCluster(cl = "cluster.EM", package = "RWeka", par.set = makeParamSet(makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeNumericLearnerParam(id = "ll-cv", default = 1e-06,
     lower = 1e-06), makeNumericLearnerParam(id = "ll-iter", default = 1e-06,
     lower = 1e-06), makeNumericLearnerParam(id = "M", default = 1e-06, lower = 1e-06),
     makeIntegerLearnerParam(id = "max", default = -1L, lower = -1L), makeIntegerLearnerParam(id = "N",
     default = -1L, lower = -1L), makeIntegerLearnerParam(id = "num-slots",
     default = 1L, lower = 1L), makeIntegerLearnerParam(id = "S", default = 100L,
     lower = 0L), makeIntegerLearnerParam(id = "X", default = 10L, lower = 1L),
     makeIntegerLearnerParam(id = "K", default = 10L, lower = 1L), makeLogicalLearnerParam(id = "V",
     default = FALSE, tunable = FALSE), makeLogicalLearnerParam(id = "output-debug-info",
     default = FALSE, tunable = FALSE)), properties = "numerics", name = "Expectation-Maximization Clustering",
     short.name = "em", callees = c("make_Weka_clusterer", "Weka_control"))
     })()
     4: makeRLearnerCluster(cl = "cluster.EM", package = "RWeka", par.set = makeParamSet(makeIntegerLearnerParam(id = "I",
     default = 100L, lower = 1L), makeNumericLearnerParam(id = "ll-cv", default = 1e-06,
     lower = 1e-06), makeNumericLearnerParam(id = "ll-iter", default = 1e-06, lower = 1e-06),
     makeNumericLearnerParam(id = "M", default = 1e-06, lower = 1e-06), makeIntegerLearnerParam(id = "max",
     default = -1L, lower = -1L), makeIntegerLearnerParam(id = "N", default = -1L,
     lower = -1L), makeIntegerLearnerParam(id = "num-slots", default = 1L, lower = 1L),
     makeIntegerLearnerParam(id = "S", default = 100L, lower = 0L), makeIntegerLearnerParam(id = "X",
     default = 10L, lower = 1L), makeIntegerLearnerParam(id = "K", default = 10L,
     lower = 1L), makeLogicalLearnerParam(id = "V", default = FALSE, tunable = FALSE),
     makeLogicalLearnerParam(id = "output-debug-info", default = FALSE, tunable = FALSE)),
     properties = "numerics", name = "Expectation-Maximization Clustering", short.name = "em",
     callees = c("make_Weka_clusterer", "Weka_control"))
     5: addClasses(makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerCluster"))
     6: makeRLearnerInternal(cl, "cluster", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
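
Errors 14 (classif.J48) and 15 (cluster.EM) repeat the missing-RWeka pattern from errors 6-10, reached through makeRLearnerClassif() and makeRLearnerCluster() respectively. When constructing learners by name, the installed-backend situation can be checked up front; a sketch assuming listLearners()'s documented check.packages filter:

    # List only cluster learners whose required packages are installed.
    ok <- mlr::listLearners("cluster", check.packages = TRUE)
    "cluster.EM" %in% ok$class  # FALSE on a machine without RWeka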
    
     ── 17. Error: MultilabelBinaryRelevanceWrapper with glmnet (#958) (@test_base_mu
     missing value where TRUE/FALSE needed
     1: train(lrn2, multilabel.task) at testthat/test_base_multilabel.R:59
     2: measureTime(fun1({
     learner.model = fun2(fun3(do.call(trainLearner, pars)))
     }))
     3: force(expr)
     4: fun1({
     learner.model = fun2(fun3(do.call(trainLearner, pars)))
     })
     5: evalVis(expr)
     6: withVisible(eval(expr, pf))
     7: eval(expr, pf)
     8: eval(expr, pf)
     9: fun2(fun3(do.call(trainLearner, pars)))
     10: fun3(do.call(trainLearner, pars))
     ...
     25: fun2(fun3(do.call(trainLearner, pars)))
     26: fun3(do.call(trainLearner, pars))
     27: do.call(trainLearner, pars)
     28: (function (.learner, .task, .subset, .weights = NULL, ...)
     {
     UseMethod("trainLearner")
     })(.learner = structure(list(id = "classif.glmnet", type = "classif", package = "glmnet",
     properties = c("numerics", "factors", "prob", "twoclass", "multiclass", "weights"
     ), par.set = structure(list(pars = list(alpha = structure(list(id = "alpha",
     type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .task = structure(list(type = "classif",
     env = <environment>, weights = NULL, blocking = NULL, coordinates = NULL, task.desc = structure(list(
     id = "y1", type = "classif", target = "y1", size = 150L, n.feat = c(numerics = 4L,
     factors = 1L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("FALSE",
     "TRUE"), positive = "FALSE", negative = "TRUE", class.distribution = structure(c(`FALSE` = 75L,
     `TRUE` = 75L), .Dim = 2L, .Dimnames = structure(list(c("FALSE", "TRUE")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), .subset = NULL)
     29: trainLearner.classif.glmnet(.learner = structure(list(id = "classif.glmnet", type = "classif",
     package = "glmnet", properties = c("numerics", "factors", "prob", "twoclass",
     "multiclass", "weights"), par.set = structure(list(pars = list(alpha = structure(list(
     id = "alpha", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), s = structure(list(id = "s", type = "numeric", len = 1L, lower = 0,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), exact = structure(list(
     id = "exact", type = "logical", len = 1L, lower = NULL, upper = NULL, values = list(
     `TRUE` = TRUE, `FALSE` = FALSE), cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = FALSE, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "predict"), class = c("LearnerParam", "Param")), nlambda = structure(list(
     id = "nlambda", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), lambda.min.ratio = structure(list(id = "lambda.min.ratio", type = "numeric",
     len = 1L, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lambda = structure(list(id = "lambda", type = "numericvector", len = NA_integer_,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), standardize = structure(list(id = "standardize", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), intercept = structure(list(id = "intercept", type = "logical", len = 1L,
     lower = NULL, upper = NULL, values = list(`TRUE` = TRUE, `FALSE` = FALSE),
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = TRUE, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), thresh = structure(list(id = "thresh", type = "numeric", len = 1L,
     lower = 0, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = TRUE, default = 1e-07, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), dfmax = structure(list(id = "dfmax", type = "integer", len = 1L, lower = 0L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), pmax = structure(list(
     id = "pmax", type = "integer", len = 1L, lower = 0L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), exclude = structure(list(id = "exclude", type = "integervector", len = NA_integer_,
     lower = 1L, upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), penalty.factor = structure(list(id = "penalty.factor", type = "numericvector",
     len = NA_integer_, lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE,
     has.default = FALSE, default = NULL, trafo = NULL, requires = NULL, tunable = TRUE,
     special.vals = list(), when = "train"), class = c("LearnerParam", "Param"
     )), lower.limits = structure(list(id = "lower.limits", type = "numericvector",
     len = NA_integer_, lower = -Inf, upper = 0, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), upper.limits = structure(list(id = "upper.limits", type = "numericvector",
     len = NA_integer_, lower = 0, upper = Inf, values = NULL, cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), maxit = structure(list(id = "maxit", type = "integer", len = 1L, lower = 1L,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 100000L, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), type.logistic = structure(list(
     id = "type.logistic", type = "discrete", len = 1L, lower = NULL, upper = NULL,
     values = list(Newton = "Newton", modified.Newton = "modified.Newton"), cnames = NULL,
     allow.inf = FALSE, has.default = FALSE, default = NULL, trafo = NULL, requires = NULL,
     tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), type.multinomial = structure(list(id = "type.multinomial", type = "discrete",
     len = 1L, lower = NULL, upper = NULL, values = list(ungrouped = "ungrouped",
     grouped = "grouped"), cnames = NULL, allow.inf = FALSE, has.default = FALSE,
     default = NULL, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), fdev = structure(list(
     id = "fdev", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-05, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), devmax = structure(list(id = "devmax", type = "numeric", len = 1L,
     lower = 0, upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 0.999, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), eps = structure(list(
     id = "eps", type = "numeric", len = 1L, lower = 0, upper = 1, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 1e-06, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), big = structure(list(id = "big", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 9.9e+35, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mnlam = structure(list(
     id = "mnlam", type = "integer", len = 1L, lower = 1, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 5, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), pmin = structure(list(id = "pmin", type = "numeric", len = 1L, lower = 0,
     upper = 1, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-09, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), exmx = structure(list(
     id = "exmx", type = "numeric", len = 1L, lower = -Inf, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 250, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param")), prec = structure(list(id = "prec", type = "numeric", len = 1L, lower = -Inf,
     upper = Inf, values = NULL, cnames = NULL, allow.inf = FALSE, has.default = TRUE,
     default = 1e-10, trafo = NULL, requires = NULL, tunable = TRUE, special.vals = list(),
     when = "train"), class = c("LearnerParam", "Param")), mxit = structure(list(
     id = "mxit", type = "integer", len = 1L, lower = 1L, upper = Inf, values = NULL,
     cnames = NULL, allow.inf = FALSE, has.default = TRUE, default = 100L, trafo = NULL,
     requires = NULL, tunable = TRUE, special.vals = list(), when = "train"), class = c("LearnerParam",
     "Param"))), forbidden = NULL), class = c("LearnerParamSet", "ParamSet")), par.vals = list(
     s = 0.01), predict.type = "response", name = "GLM with Lasso or Elasticnet Regularization",
     short.name = "glmnet", note = "The family parameter is set to `binomial` for two-class problems and to `multinomial` otherwise.\n Factors automatically get converted to dummy columns, ordered factors to integer.\n Parameter `s` (value of the regularization parameter used for predictions) is set to `0.1` by default,\n but needs to be tuned by the user.\n glmnet uses a global control object for its parameters. mlr resets all control parameters to their defaults\n before setting the specified parameters and after training.\n If you are setting glmnet.control parameters through glmnet.control,\n you need to save and re-set them after running the glmnet learner.",
     callees = c("glmnet", "glmnet.control", "predict.glmnet"), help.list = list(s = "Argument of: glmnet::predict.glmnet\n\nValue(s) of the penalty parameter lambda at which predictions are required. Default is the entire sequence used to create the model.",
     exact = "Argument of: glmnet::predict.glmnet\n\nThis argument is relevant only when predictions are made at values of s (lambda) different from those used in the fitting of the original model. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s (lambda) that do not coincide with those used in the fitting algorithm. While this is often a good approximation, it can sometimes be a bit coarse. With exact=TRUE, these different values of s are merged (and sorted) with object$lambda, and the model is refit before predictions are made. In this case, it is required to supply the original data x= and y= as additional named arguments to predict() or coef(). The workhorse predict.glmnet() needs to update the model, and so needs the data used to create it. The same is true of weights, offset, penalty.factor, lower.limits, upper.limits if these were used in the original call. Failure to do so will result in an error.",
     fdev = "Argument of: glmnet::glmnet.control\n\nminimum fractional change in deviance for stopping path; factory default = 1.0e-5",
     devmax = "Argument of: glmnet::glmnet.control\n\nmaximum fraction of explained deviance for stopping path; factory default = 0.999",
     eps = "Argument of: glmnet::glmnet.control\n\nminimum value of lambda.min.ratio (see glmnet); factory default= 1.0e-6",
     big = "Argument of: glmnet::glmnet.control\n\nlarge floating point number; factory default = 9.9e35. Inf in definition of upper.limit is set to big",
     mnlam = "Argument of: glmnet::glmnet.control\n\nminimum number of path points (lambda values) allowed; factory default = 5",
     pmin = "Argument of: glmnet::glmnet.control\n\nminimum probability for any class. factory default = 1.0e-9. Note that this implies a pmax of 1-pmin.",
     exmx = "Argument of: glmnet::glmnet.control\n\nmaximum allowed exponent. factory default = 250.0",
     prec = "Argument of: glmnet::glmnet.control\n\nconvergence threshold for multi response bounds adjustment solution. factory default = 1.0e-10",
     mxit = "Argument of: glmnet::glmnet.control\n\nmaximum iterations for multiresponse bounds adjustment solution. factory default = 100",
     alpha = "Argument of: glmnet::glmnet\n\nThe elasticnet mixing parameter, with 0≤α≤ 1. The penalty is defined as (1-α)/2||β||_2^2+α||β||_1. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.",
     nlambda = "Argument of: glmnet::glmnet\n\nThe number of lambda values - default is 100.",
     lambda.min.ratio = "Argument of: glmnet::glmnet\n\nSmallest value for lambda, as a fraction of lambda.max, the (data derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for \"binomial\" and \"multinomial\" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.",
     lambda = "Argument of: glmnet::glmnet\n\nA user supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead). Supply instead a decreasing sequence of lambda values. glmnet relies on its warms starts for speed, and its often faster to fit a whole path than compute a single fit.",
     standardize = "Argument of: glmnet::glmnet\n\nLogical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are in the same units already, you might not wish to standardize. See details below for y standardization with family=\"gaussian\".",
     intercept = "Argument of: glmnet::glmnet\n\nShould intercept(s) be fitted (default=TRUE) or set to zero (FALSE)",
     thresh = "Argument of: glmnet::glmnet\n\nConvergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Defaults value is 1E-7.",
     dfmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.",
     pmax = "Argument of: glmnet::glmnet\n\nLimit the maximum number of variables ever to be nonzero",
     exclude = "Argument of: glmnet::glmnet\n\nIndices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).",
     penalty.factor = "Argument of: glmnet::glmnet\n\nSeparate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change.",
     lower.limits = "Argument of: glmnet::glmnet\n\nVector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars",
     upper.limits = "Argument of: glmnet::glmnet\n\nVector of upper limits for each coefficient; default Inf. See lower.limits",
     maxit = "Argument of: glmnet::glmnet\n\nMaximum number of passes over the data for all lambda values; default is 10^5.",
     type.logistic = "Argument of: glmnet::glmnet\n\nIf \"Newton\" then the exact hessian is used (default), while \"modified.Newton\" uses an upper-bound on the hessian, and can be faster.",
     type.multinomial = "Argument of: glmnet::glmnet\n\nIf \"grouped\" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in our out together. The default is \"ungrouped\""),
     config = list(), fix.factors.prediction = FALSE), class = c("classif.glmnet",
     "RLearnerClassif", "RLearner", "Learner")), .task = structure(list(type = "classif",
     env = <environment>, weights = NULL, blocking = NULL, coordinates = NULL, task.desc = structure(list(
     id = "y1", type = "classif", target = "y1", size = 150L, n.feat = c(numerics = 4L,
     factors = 1L, ordered = 0L, functionals = 0L), has.missings = FALSE, has.weights = FALSE,
     has.blocking = FALSE, has.coordinates = FALSE, class.levels = c("FALSE",
     "TRUE"), positive = "FALSE", negative = "TRUE", class.distribution = structure(c(`FALSE` = 75L,
     `TRUE` = 75L), .Dim = 2L, .Dimnames = structure(list(c("FALSE", "TRUE")), .Names = ""), class = "table")), class = c("ClassifTaskDesc",
     "SupervisedTaskDesc", "TaskDesc"))), class = c("ClassifTask", "SupervisedTask",
     "Task")), .subset = NULL)
     30: attachTrainingInfo(do.call(glmnet::glmnet, args), info)
     31: do.call(glmnet::glmnet, args)
     32: (function (x, y, family = c("gaussian", "binomial", "poisson", "multinomial", "cox",
     "mgaussian"), weights, offset = NULL, alpha = 1, nlambda = 100, lambda.min.ratio = ifelse(nobs <
     nvars, 0.01, 1e-04), lambda = NULL, standardize = TRUE, intercept = TRUE, thresh = 1e-07,
     dfmax = nvars + 1, pmax = min(dfmax * 2 + 20, nvars), exclude, penalty.factor = rep(1,
     nvars), lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05, type.gaussian = ifelse(nvars <
     500, "covariance", "naive"), type.logistic = c("Newton", "modified.Newton"),
     standardize.response = FALSE, type.multinomial = c("ungrouped", "grouped"))
     {
     family = match.arg(family)
     if (alpha > 1) {
     warning("alpha >1; set to 1")
     alpha = 1
     }
     if (alpha < 0) {
     warning("alpha<0; set to 0")
     alpha = 0
     }
     alpha = as.double(alpha)
     this.call = match.call()
     nlam = as.integer(nlambda)
     y = drop(y)
     np = dim(x)
     if (is.null(np) | (np[2] <= 1))
     stop("x should be a matrix with 2 or more columns")
     nobs = as.integer(np[1])
     if (missing(weights))
     weights = rep(1, nobs)
     else if (length(weights) != nobs)
     stop(paste("number of elements in weights (", length(weights), ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     nvars = as.integer(np[2])
     dimy = dim(y)
     nrowy = ifelse(is.null(dimy), length(y), dimy[1])
     if (nrowy != nobs)
     stop(paste("number of observations in y (", nrowy, ") not equal to the number of rows of x (",
     nobs, ")", sep = ""))
     vnames = colnames(x)
     if (is.null(vnames))
     vnames = paste("V", seq(nvars), sep = "")
     ne = as.integer(dfmax)
     nx = as.integer(pmax)
     if (missing(exclude))
     exclude = integer(0)
     if (any(penalty.factor == Inf)) {
     exclude = c(exclude, seq(nvars)[penalty.factor == Inf])
     exclude = sort(unique(exclude))
     }
     if (length(exclude) > 0) {
     jd = match(exclude, seq(nvars), 0)
     if (!all(jd > 0))
     stop("Some excluded variables out of range")
     penalty.factor[jd] = 1
     jd = as.integer(c(length(jd), jd))
     }
     else jd = as.integer(0)
     vp = as.double(penalty.factor)
     internal.parms = glmnet.control()
     if (any(lower.limits > 0)) {
     stop("Lower limits should be non-positive")
     }
     if (any(upper.limits < 0)) {
     stop("Upper limits should be non-negative")
     }
     lower.limits[lower.limits == -Inf] = -internal.parms$big
     upper.limits[upper.limits == Inf] = internal.parms$big
     if (length(lower.limits) < nvars) {
     if (length(lower.limits) == 1)
     lower.limits = rep(lower.limits, nvars)
     else stop("Require length 1 or nvars lower.limits")
     }
     else lower.limits = lower.limits[seq(nvars)]
     if (length(upper.limits) < nvars) {
     if (length(upper.limits) == 1)
     upper.limits = rep(upper.limits, nvars)
     else stop("Require length 1 or nvars upper.limits")
     }
     else upper.limits = upper.limits[seq(nvars)]
     cl = rbind(lower.limits, upper.limits)
     if (any(cl == 0)) {
     fdev = glmnet.control()$fdev
     if (fdev != 0) {
     glmnet.control(fdev = 0)
     on.exit(glmnet.control(fdev = fdev))
     }
     }
     storage.mode(cl) = "double"
     isd = as.integer(standardize)
     intr = as.integer(intercept)
     if (!missing(intercept) && family == "cox")
     warning("Cox model has no intercept")
     jsd = as.integer(standardize.response)
     thresh = as.double(thresh)
     if (is.null(lambda)) {
     if (lambda.min.ratio >= 1)
     stop("lambda.min.ratio should be less than 1")
     flmin = as.double(lambda.min.ratio)
     ulam = double(1)
     }
     else {
     flmin = as.double(1)
     if (any(lambda < 0))
     stop("lambdas should be non-negative")
     ulam = as.double(rev(sort(lambda)))
     nlam = as.integer(length(lambda))
     }
     is.sparse = FALSE
     ix = jx = NULL
     if (inherits(x, "sparseMatrix")) {
     is.sparse = TRUE
     x = as(x, "CsparseMatrix")
     x = as(x, "dgCMatrix")
     ix = as.integer(x@p + 1)
     jx = as.integer(x@i + 1)
     x = as.double(x@x)
     }
     kopt = switch(match.arg(type.logistic), Newton = 0, modified.Newton = 1)
     if (family == "multinomial") {
     type.multinomial = match.arg(type.multinomial)
     if (type.multinomial == "grouped")
     kopt = 2
     }
     kopt = as.integer(kopt)
     fit = switch(family, gaussian = elnet(x, is.sparse, ix, jx, y, weights, offset,
     type.gaussian, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin, ulam,
     thresh, isd, intr, vnames, maxit), poisson = fishnet(x, is.sparse, ix, jx,
     y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam, flmin,
     ulam, thresh, isd, intr, vnames, maxit), binomial = lognet(x, is.sparse,
     ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne, nx, nlam,
     flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family), multinomial = lognet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne,
     nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family), cox = coxnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne,
     nx, nlam, flmin, ulam, thresh, isd, vnames, maxit), mgaussian = mrelnet(x,
     is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl, ne,
     nx, nlam, flmin, ulam, thresh, isd, jsd, intr, vnames, maxit))
     if (is.null(lambda))
     fit$lambda = fix.lam(fit$lambda)
     fit$call = this.call
     fit$nobs = nobs
     class(fit) = c(class(fit), "glmnet")
     fit
     })(x = structure(c(5.1, 4.9, 4.7, 4.6, 5, 5.4, 4.6, 5, 4.4, 4.9, 5.4, 4.8, 4.8, 4.3,
     5.8, 5.7, 5.4, 5.1, 5.7, 5.1, 5.4, 5.1, 4.6, 5.1, 4.8, 5, 5, 5.2, 5.2, 4.7, 4.8,
     5.4, 5.2, 5.5, 4.9, 5, 5.5, 4.9, 4.4, 5.1, 5, 4.5, 4.4, 5, 5.1, 4.8, 5.1, 4.6, 5.3,
     5, 7, 6.4, 6.9, 5.5, 6.5, 5.7, 6.3, 4.9, 6.6, 5.2, 5, 5.9, 6, 6.1, 5.6, 6.7, 5.6,
     5.8, 6.2, 5.6, 5.9, 6.1, 6.3, 6.1, 6.4, 6.6, 6.8, 6.7, 6, 5.7, 5.5, 5.5, 5.8, 6,
     5.4, 6, 6.7, 6.3, 5.6, 5.5, 5.5, 6.1, 5.8, 5, 5.6, 5.7, 5.7, 6.2, 5.1, 5.7, 6.3,
     5.8, 7.1, 6.3, 6.5, 7.6, 4.9, 7.3, 6.7, 7.2, 6.5, 6.4, 6.8, 5.7, 5.8, 6.4, 6.5, 7.7,
     7.7, 6, 6.9, 5.6, 7.7, 6.3, 6.7, 7.2, 6.2, 6.1, 6.4, 7.2, 7.4, 7.9, 6.4, 6.3, 6.1,
     7.7, 6.3, 6.4, 6, 6.9, 6.7, 6.9, 5.8, 6.8, 6.7, 6.7, 6.3, 6.5, 6.2, 5.9, 3.5, 3,
     3.2, 3.1, 3.6, 3.9, 3.4, 3.4, 2.9, 3.1, 3.7, 3.4, 3, 3, 4, 4.4, 3.9, 3.5, 3.8, 3.8,
     3.4, 3.7, 3.6, 3.3, 3.4, 3, 3.4, 3.5, 3.4, 3.2, 3.1, 3.4, 4.1, 4.2, 3.1, 3.2, 3.5,
     3.6, 3, 3.4, 3.5, 2.3, 3.2, 3.5, 3.8, 3, 3.8, 3.2, 3.7, 3.3, 3.2, 3.2, 3.1, 2.3,
     2.8, 2.8, 3.3, 2.4, 2.9, 2.7, 2, 3, 2.2, 2.9, 2.9, 3.1, 3, 2.7, 2.2, 2.5, 3.2, 2.8,
     2.5, 2.8, 2.9, 3, 2.8, 3, 2.9, 2.6, 2.4, 2.4, 2.7, 2.7, 3, 3.4, 3.1, 2.3, 3, 2.5,
     2.6, 3, 2.6, 2.3, 2.7, 3, 2.9, 2.9, 2.5, 2.8, 3.3, 2.7, 3, 2.9, 3, 3, 2.5, 2.9, 2.5,
     3.6, 3.2, 2.7, 3, 2.5, 2.8, 3.2, 3, 3.8, 2.6, 2.2, 3.2, 2.8, 2.8, 2.7, 3.3, 3.2,
     2.8, 3, 2.8, 3, 2.8, 3.8, 2.8, 2.8, 2.6, 3, 3.4, 3.1, 3, 3.1, 3.1, 3.1, 2.7, 3.2,
     3.3, 3, 2.5, 3, 3.4, 3, 1.4, 1.4, 1.3, 1.5, 1.4, 1.7, 1.4, 1.5, 1.4, 1.5, 1.5, 1.6,
     1.4, 1.1, 1.2, 1.5, 1.3, 1.4, 1.7, 1.5, 1.7, 1.5, 1, 1.7, 1.9, 1.6, 1.6, 1.5, 1.4,
     1.6, 1.6, 1.5, 1.5, 1.4, 1.5, 1.2, 1.3, 1.4, 1.3, 1.5, 1.3, 1.3, 1.3, 1.6, 1.9, 1.4,
     1.6, 1.4, 1.5, 1.4, 4.7, 4.5, 4.9, 4, 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4,
     4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4, 4.9, 4.7, 4.3, 4.4, 4.8, 5, 4.5, 3.5,
     3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1, 4, 4.4, 4.6, 4, 3.3, 4.2, 4.2, 4.2,
     4.3, 3, 4.1, 6, 5.1, 5.9, 5.6, 5.8, 6.6, 4.5, 6.3, 5.8, 6.1, 5.1, 5.3, 5.5, 5, 5.1,
     5.3, 5.5, 6.7, 6.9, 5, 5.7, 4.9, 6.7, 4.9, 5.7, 6, 4.8, 4.9, 5.6, 5.8, 6.1, 6.4,
     5.6, 5.1, 5.6, 6.1, 5.6, 5.5, 4.8, 5.4, 5.6, 5.1, 5.1, 5.9, 5.7, 5.2, 5, 5.2, 5.4,
     5.1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3, 0.2, 0.2, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.4,
     0.4, 0.3, 0.3, 0.3, 0.2, 0.4, 0.2, 0.5, 0.2, 0.2, 0.4, 0.2, 0.2, 0.2, 0.2, 0.4, 0.1,
     0.2, 0.2, 0.2, 0.2, 0.1, 0.2, 0.2, 0.3, 0.3, 0.2, 0.6, 0.4, 0.3, 0.2, 0.2, 0.2, 0.2,
     1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1, 1.3, 1.4, 1, 1.5, 1, 1.4, 1.3, 1.4, 1.5, 1,
     1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4, 1.4, 1.7, 1.5, 1, 1.1, 1, 1.2, 1.6, 1.5,
     1.6, 1.5, 1.3, 1.3, 1.3, 1.2, 1.4, 1.2, 1, 1.3, 1.2, 1.3, 1.3, 1.1, 1.3, 2.5, 1.9,
     2.1, 1.8, 2.2, 2.1, 1.7, 1.8, 1.8, 2.5, 2, 1.9, 2.1, 2, 2.4, 2.3, 1.8, 2.2, 2.3,
     1.5, 2.3, 2, 2, 1.8, 2.1, 1.8, 1.8, 1.8, 2.1, 1.6, 1.9, 2, 2.2, 1.5, 1.4, 2.3, 2.4,
     1.8, 1.8, 2.1, 2.4, 2.3, 1.9, 2.3, 2.5, 2.3, 1.9, 2, 2.3, 1.8, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
     1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), .Dim = c(150L, 7L), .Dimnames = list(NULL, c("Sepal.Length",
     "Sepal.Width", "Petal.Length", "Petal.Width", "Species.setosa", "Species.versicolor",
     "Species.virginica"))), y = structure(c(2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
     1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L,
     2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
     1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L,
     2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
     1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L,
     2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L,
     1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L), .Label = c("FALSE", "TRUE"), class = "factor"),
     family = "binomial")
     33: lognet(x, is.sparse, ix, jx, y, weights, offset, alpha, nobs, nvars, jd, vp, cl,
     ne, nx, nlam, flmin, ulam, thresh, isd, intr, vnames, maxit, kopt, family)
     34: table(y)
    
     ── 18. Error: tuning allows usage of budget (@test_base_tuning.R#120) ─────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3320 SKIPPED: 0 FAILED: 18
     1. Error: BaggingWrapper with glmnet (#958) (@test_base_BaggingWrapper.R#71)
     2. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     3. Error: PreprocWrapper with glmnet (#958) (@test_base_PreprocWrapper.R#47)
     4. Error: TuneWrapper passed predict hyper pars correctly to base learner (@test_base_TuneWrapper.R#51)
     5. Error: TuneWrapper with glmnet (#958) (@test_base_TuneWrapper.R#118)
     6. Error: clustering performance (@test_base_clustering.R#15)
     7. Error: clustering performance with missing clusters (@test_base_clustering.R#27)
     8. Error: clustering resample (@test_base_clustering.R#41)
     9. Error: clustering benchmark (@test_base_clustering.R#52)
     1. ...
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-patched-solaris-x86
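
The glmnet learner note quoted in the traceback above warns that glmnet keeps its control parameters in a global object, which mlr resets to defaults around training. Below is a minimal sketch of the save/re-set pattern the note recommends, assuming glmnet.control() returns the current settings as a named list (as its help page documents) and using mlr's bundled iris.task for illustration:

    library(mlr)     # makeLearner(), train(), iris.task
    library(glmnet)  # glmnet.control()

    old.ctrl = glmnet.control()  # snapshot the current global control settings
    glmnet.control(fdev = 0)     # custom setting: disable the deviance-based path stop

    lrn = makeLearner("classif.glmnet", s = 0.01)
    mod = train(lrn, iris.task)  # mlr resets glmnet.control() to defaults here

    do.call(glmnet.control, old.ctrl)  # re-set the saved settings afterwards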

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [470s/482s]
    Running the tests in ‘tests/run-base.R’ failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     ── 1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23) ────────────
     For learner classif.lqa please install the following packages: lqa
     1: makeLearner("classif.lqa") at testthat/test_base_MulticlassWrapper.R:23
     2: do.call(constructor, list())
     3: (function ()
     {
     makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge",
     "genet", "weighted.fusion"))), makeNumericLearnerParam(id = "alpha",
     lower = 0, upper = 1, requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")),
     makeNumericLearnerParam(id = "lambda1", lower = 0, requires = quote(penalty %in%
     c("enet", "fused.lasso", "icb", "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop",
     default = TRUE), makeNumericLearnerParam(id = "c1", default = 1e-08,
     lower = 0), makeIntegerLearnerParam(id = "digits", default = 5L, lower = 1L)),
     properties = c("numerics", "prob", "twoclass"), par.vals = list(penalty = "lasso",
     lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet",
     "fused.lasso", "genet", "icb", "lasso", "licb", "oscar", "penalreg",
     "ridge", "scad", "weighted.fusion"))
     })()
     4: makeRLearnerClassif(cl = "classif.lqa", package = "lqa", par.set = makeParamSet(makeDiscreteLearnerParam(id = "penalty",
     values = c("adaptive.lasso", "ao", "bridge", "enet", "fused.lasso", "genet",
     "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion")),
     makeNumericLearnerParam(id = "lambda", lower = 0, requires = quote(penalty %in%
     c("adaptive.lasso", "ao", "bridge", "genet", "lasso", "oscar", "penalreg",
     "ridge", "scad"))), makeNumericLearnerParam(id = "gamma", lower = 1 +
     .Machine$double.eps, requires = quote(penalty %in% c("ao", "bridge", "genet",
     "weighted.fusion"))), makeNumericLearnerParam(id = "alpha", lower = 0, upper = 1,
     requires = quote(penalty == "genet")), makeNumericLearnerParam(id = "oscar.c",
     lower = 0, requires = quote(penalty == "oscar")), makeNumericLearnerParam(id = "a",
     lower = 2 + .Machine$double.eps, requires = quote(penalty == "scad")), makeNumericLearnerParam(id = "lambda1",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeNumericLearnerParam(id = "lambda2",
     lower = 0, requires = quote(penalty %in% c("enet", "fused.lasso", "icb",
     "licb", "weighted.fusion"))), makeDiscreteLearnerParam(id = "method",
     default = "lqa.update2", values = c("lqa.update2", "ForwardBoost", "GBlockBoost")),
     makeNumericLearnerParam(id = "var.eps", default = .Machine$double.eps, lower = 0),
     makeIntegerLearnerParam(id = "max.steps", lower = 1L, default = 5000L), makeNumericLearnerParam(id = "conv.eps",
     default = 0.001, lower = 0), makeLogicalLearnerParam(id = "conv.stop", default = TRUE),
     makeNumericLearnerParam(id = "c1", default = 1e-08, lower = 0), makeIntegerLearnerParam(id = "digits",
     default = 5L, lower = 1L)), properties = c("numerics", "prob", "twoclass"),
     par.vals = list(penalty = "lasso", lambda = 0.1), name = "Fitting penalized Generalized Linear Models with the LQA algorithm",
     short.name = "lqa", note = "`penalty` has been set to `\"lasso\"` and `lambda` to `0.1` by default. The parameters `lambda`, `gamma`, `alpha`, `oscar.c`, `a`, `lambda1` and `lambda2` are the tuning parameters of the `penalty` function being used, and correspond to the parameters as named in the respective help files. Parameter `c` for penalty method `oscar` has been named `oscar.c`. Parameters `lambda1` and `lambda2` correspond to the parameters named 'lambda_1' and 'lambda_2' of the penalty functions `enet`, `fused.lasso`, `icb`, `licb`, as well as `weighted.fusion`.",
     callees = c("lqa", "lqa.control", "adaptive.lasso", "ao", "bridge", "enet", "fused.lasso",
     "genet", "icb", "lasso", "licb", "oscar", "penalreg", "ridge", "scad", "weighted.fusion"))
     5: addClasses(makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties,
     name, short.name, note, callees), c(cl, "RLearnerClassif"))
     6: makeRLearnerInternal(cl, "classif", package, par.set, par.vals, properties, name,
     short.name, note, callees)
     7: requirePackages(package, why = stri_paste("learner", id, sep = " "), default.method = "load")
     8: stopf("For %s please install the following packages: %s", why, ps)
    
     ── 2. Error: tuning allows usage of budget (@test_base_tuning.R#120) ──────────
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 3578 SKIPPED: 0 FAILED: 2
     1. Error: MulticlassWrapper (@test_base_MulticlassWrapper.R#23)
     2. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-linux-x86_64
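
The recurring MulticlassWrapper failure is an environment problem rather than a logic bug: lqa is only a suggested package and has been archived from CRAN, so makeLearner("classif.lqa") cannot load it on these machines. A hedged sketch of the usual testthat guard for tests that exercise learners from Suggests packages follows; the test body is illustrative, not mlr's actual test:

    library(testthat)

    test_that("MulticlassWrapper", {
      skip_if_not_installed("lqa")           # skip, instead of erroring, when lqa is absent
      lrn = mlr::makeLearner("classif.lqa")  # safe here: lqa is known to be installed
      expect_true(inherits(lrn, "RLearnerClassif"))
    })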

Version: 2.12.1
Check: running tests for arch ‘i386’
Result: ERROR
     Running 'run-base.R' [571s]
     Running 'run-basenocran.R' [0s]
     Running 'run-classif1.R' [0s]
     Running 'run-classif2.R' [1s]
     Running 'run-cluster.R' [1s]
     Running 'run-featsel.R' [1s]
     Running 'run-learners-classif.R' [1s]
     Running 'run-learners-classiflabelswitch.R' [0s]
     Running 'run-learners-cluster.R' [0s]
     Running 'run-learners-general.R' [0s]
     Running 'run-learners-multilabel.R' [0s]
     Running 'run-learners-regr.R' [1s]
     Running 'run-learners-surv.R' [1s]
     Running 'run-lint.R' [5s]
     Running 'run-multilabel.R' [0s]
     Running 'run-parallel.R' [0s]
     Running 'run-regr.R' [1s]
     Running 'run-stack.R' [1s]
     Running 'run-surv.R' [0s]
     Running 'run-tune.R' [1s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-windows-ix86+x86_64

Version: 2.12.1
Check: running tests for arch ‘x64’
Result: ERROR
     Running 'run-base.R' [551s]
     Running 'run-basenocran.R' [0s]
     Running 'run-classif1.R' [1s]
     Running 'run-classif2.R' [1s]
     Running 'run-cluster.R' [0s]
     Running 'run-featsel.R' [0s]
     Running 'run-learners-classif.R' [0s]
     Running 'run-learners-classiflabelswitch.R' [0s]
     Running 'run-learners-cluster.R' [1s]
     Running 'run-learners-general.R' [1s]
     Running 'run-learners-multilabel.R' [1s]
     Running 'run-learners-regr.R' [0s]
     Running 'run-learners-surv.R' [1s]
     Running 'run-lint.R' [5s]
     Running 'run-multilabel.R' [0s]
     Running 'run-parallel.R' [0s]
     Running 'run-regr.R' [0s]
     Running 'run-stack.R' [0s]
     Running 'run-surv.R' [1s]
     Running 'run-tune.R' [1s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
    
     Attaching package: 'BBmisc'
    
     The following object is masked from 'package:base':
    
     isFALSE
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-release-windows-ix86+x86_64
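
The 'discrete.names' assertion that fails on every flavor above appears to be a version mismatch between mlr 2.12.1 and the installed ParamHelpers: per the traceback, sampleValue() ends up forwarding a NULL discrete.names to checkmate::assertFlag(), which accepts only a single non-NA logical. A minimal sketch reproducing the assertion; sampleValueSketch is a hypothetical stand-in, not the actual ParamHelpers function:

    library(checkmate)

    sampleValueSketch = function(par, discrete.names = NULL) {
      # assertFlag() demands exactly one non-NA TRUE/FALSE; the NULL default
      # triggers "Must be of type 'logical flag', not 'NULL'" as in the logs.
      assertFlag(discrete.names)
      par
    }

    tryCatch(sampleValueSketch("p"), error = conditionMessage)  # reproduces the failure
    sampleValueSketch("p", discrete.names = FALSE)              # passes the assertion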

Version: 2.12.1
Check: running tests for arch ‘i386’
Result: ERROR
     Running 'run-base.R' [494s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-windows-ix86+x86_64

Version: 2.12.1
Check: running tests for arch ‘x64’
Result: ERROR
     Running 'run-base.R' [517s]
    Running the tests in 'tests/run-base.R' failed.
    Complete output:
     > library(testthat)
     > test_check("mlr", filter = "base_")
     Loading required package: mlr
     Loading required package: ParamHelpers
    
     Attaching package: 'rex'
    
     The following object is masked from 'package:testthat':
    
     matches
    
     -- 1. Error: tuning allows usage of budget (@test_base_tuning.R#120) ----------
     Assertion on 'discrete.names' failed: Must be of type 'logical flag', not 'NULL'.
     1: tuneParams(lrn, binaryclass.task, resampling = rdesc, par.set = ps, control = ctrl) at testthat/test_base_tuning.R:120
     2: sel.func(learner, task, resampling, measures, par.set, control, opt.path, show.info,
     resample.fun)
     3: sampleValue(par.set, start, trafo = FALSE)
     4: sampleValue.ParamSet(par.set, start, trafo = FALSE)
     5: lapply(par$pars, sampleValue, discrete.names = discrete.names, trafo = trafo)
     6: FUN(X[[i]], ...)
     7: sampleValue.Param(X[[i]], ...)
     8: assertFlag(discrete.names)
     9: makeAssertion(x, res, .var.name, add)
     10: mstop("Assertion on '%s' failed: %s.", var.name, res)
    
     == testthat results ===========================================================
     OK: 3581 SKIPPED: 0 FAILED: 1
     1. Error: tuning allows usage of budget (@test_base_tuning.R#120)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-windows-ix86+x86_64

Version: 2.12.1
Check: tests
Result: ERROR
     Running ‘run-base.R’ [362s/266s]
    Running the tests in ‘tests/run-base.R’ failed.
    Last 13 lines of output:
     1. Error: plotFilterValues (@test_base_generateFilterValuesData.R#68)
     2. Error: args are passed down to filter methods (@test_base_generateFilterValuesData.R#97)
     3. Error: 2 hyperparams (@test_base_generateHyperParsEffect.R#174)
     4. Error: 2 hyperparams nested (@test_base_generateHyperParsEffect.R#230)
     5. Error: getHyperPars (@test_base_getHyperPars.R#19)
     6. Error: WeightedClassesWrapper, binary (@test_base_imbal_weightedclasses.R#16)
     7. Error: WeightedClassesWrapper, multiclass (@test_base_imbal_weightedclasses.R#41)
     8. Error: getClassWeightParam (@test_base_imbal_weightedclasses.R#72)
     9. Error: listLearners (@test_base_listLearners.R#18)
     1. ...
    
     Error: testthat unit tests failed
     In addition: Warning message:
     replacing previous import 'BBmisc::isFALSE' by 'backports::isFALSE' when loading 'ParamHelpers'
     Execution halted
Flavor: r-oldrel-osx-x86_64
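
The import warning in this last log ("replacing previous import 'BBmisc::isFALSE' by 'backports::isFALSE'") is harmless but explains the masking messages seen throughout: base::isFALSE() only exists from R 3.5.0, so on older R both BBmisc and backports supply an implementation and ParamHelpers ends up importing it from both. The base definition that both emulate is simply:

    # Equivalent of base::isFALSE(), available natively from R 3.5.0:
    isFALSE = function(x) {
      is.logical(x) && length(x) == 1L && !is.na(x) && !x
    }

    isFALSE(FALSE)  # TRUE
    isFALSE(0)      # FALSE: numeric zero is not a logical scalar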