CRAN Package Check Results for Package contextual

Last updated on 2019-03-19 07:46:55 CET.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.9.8       33.09  181.16  214.25  ERROR
r-devel-linux-x86_64-debian-gcc    0.9.8.1     26.73  172.11  198.84  OK
r-devel-linux-x86_64-fedora-clang  0.9.8.1                    212.14  OK
r-devel-linux-x86_64-fedora-gcc    0.9.8.1                    205.38  OK
r-devel-windows-ix86+x86_64        0.9.8.1     56.00  176.00  232.00  OK
r-patched-linux-x86_64             0.9.8.1     30.23  161.91  192.14  OK
r-patched-solaris-x86              0.9.8.1                    225.20  NOTE
r-release-linux-x86_64             0.9.8.1     28.11  169.82  197.93  OK
r-release-windows-ix86+x86_64      0.9.8       37.00  167.00  204.00  OK
r-release-osx-x86_64               0.9.8                              OK
r-oldrel-windows-ix86+x86_64       0.9.8       11.00  207.00  218.00  OK
r-oldrel-osx-x86_64                0.9.8.1                            ERROR

Check Details

Version: 0.9.8
Check: whether package can be installed
Result: WARN
    Found the following significant warnings:
     Warning: unable to re-encode 'bandit_offline_replay_evaluator.R' line 125
Flavor: r-devel-linux-x86_64-debian-clang
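
Editor's note: a re-encoding warning like the one above usually means the flagged source file contains non-ASCII characters not covered by the package's declared encoding. Base R's tools package can pinpoint them; a minimal sketch, assuming the working directory is the package root and the standard `R/` layout:

```r
# Locate non-ASCII characters in the flagged file (prints offending lines).
library(tools)
showNonASCIIfile("R/bandit_offline_replay_evaluator.R")

# If the non-ASCII text is intentional, declare the encoding in DESCRIPTION:
#   Encoding: UTF-8
```

If no characters are intentional, replacing them with ASCII equivalents (or `\uxxxx` escapes) typically clears the warning on all flavors.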

Version: 0.9.8
Check: tests
Result: ERROR
     Running 'testthat.R' [81s/41s]
    Running the tests in 'tests/testthat.R' failed.
    Complete output:
     > Sys.setenv("R_TESTS" = "")
     >
     > library(testthat)
     > library(contextual)
     >
     > test_check("contextual")
     -- 1. Failure: Agent (@test_agent.R#28) ---------------------------------------
     history$cumulative$testme$reward not identical to 0.4.
     1/1 mismatches
     [1] 0.5 - 0.4 == 0.1
    
     -- 2. Failure: ContextualLinearBandit, binary_rewards = FALSE (@test_bandits.R#
     history$cumulative$LinUCBDisjointOptimized$cum_regret not equal to 4.86.
     1/1 mismatches
     [1] 5.66 - 4.86 == 0.795
    
     -- 3. Failure: ContextualLinearBandit, binary_rewards = TRUE (@test_bandits.R#13
     history$cumulative$LinUCBDisjoint$cum_regret not equal to 6.4.
     1/1 mismatches
     [1] 7.2 - 6.4 == 0.8
    
     -- 4. Failure: ContextualWheelBandit (@test_bandits.R#162) --------------------
     history$cumulative$LinUCBDisjointOptimized$cum_regret not equal to 45.6.
     1/1 mismatches
     [1] 35.7 - 45.6 == -9.91
    
     -- 5. Failure: ContextualWheelBandit (@test_bandits.R#163) --------------------
     history$cumulative$UCB1$cum_regret not equal to 35.8.
     1/1 mismatches
     [1] 45.5 - 35.8 == 9.7
    
     -- 6. Failure: BasicGaussianBandit (@test_bandits.R#178) ----------------------
     history$cumulative$EpsilonGreedy$cum_regret not equal to 2.09.
     1/1 mismatches
     [1] 7.18 - 2.09 == 5.09
    
     -- 7. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#204) --------
     history$cumulative$Random$cum_regret not equal to 5.3.
     1/1 mismatches
     [1] 5.4 - 5.3 == 0.1
    
     -- 8. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#205) --------
     history$cumulative$GittinsBrezziLai$cum_regret not equal to 1.
     1/1 mismatches
     [1] 1.3 - 1 == 0.3
    
     -- 9. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#207) --------
     history$cumulative$UCB1$cum_regret not equal to 3.4.
     1/1 mismatches
     [1] 3.5 - 3.4 == 0.1
    
     -- 10. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#209) -------
     history$cumulative$EpsilonGreedy$cum_regret not equal to 2.8.
     1/1 mismatches
     [1] 2.3 - 2.8 == -0.5
    
     -- 11. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#210) -------
     history$cumulative$EpsilonFirst$cum_regret not equal to 4.3.
     1/1 mismatches
     [1] 3.2 - 4.3 == -1.1
    
     -- 12. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#212) -------
     history$cumulative$BootstrapTS$cum_regret not equal to 2.8.
     1/1 mismatches
     [1] 2.7 - 2.8 == -0.1
    
     -- 13. Failure: BasicBernoulliBandit Long (@test_bandits.R#231) ---------------
     history$cumulative$GittinsBrezziLai$cum_regret not equal to 3.
     1/1 mismatches
     [1] 0 - 3 == -3
    
     -- 14. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#258) -
     history$cumulative$Random$cum_regret not equal to 3.4.
     1/1 mismatches
     [1] 2.3 - 3.4 == -1.1
    
     -- 15. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#259) -
     history$cumulative$Oracle$cum_regret not equal to 0.9.
     1/1 mismatches
     [1] 0.7 - 0.9 == -0.2
    
     -- 16. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#260) -
     history$cumulative$GittinsBrezziLai$cum_regret not equal to 3.1.
     1/1 mismatches
     [1] 2.1 - 3.1 == -1
    
     -- 17. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#261) -
     history$cumulative$Exp3$cum_regret not equal to 3.5.
     1/1 mismatches
     [1] 3.3 - 3.5 == -0.2
    
     -- 18. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#262) -
     history$cumulative$UCB1$cum_regret not equal to 3.4.
     1/1 mismatches
     [1] 2.4 - 3.4 == -1
    
     -- 19. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#263) -
     history$cumulative$ThompsonSampling$cum_regret not equal to 3.
     1/1 mismatches
     [1] 2.4 - 3 == -0.6
    
     -- 20. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#264) -
     history$cumulative$EpsilonGreedy$cum_regret not equal to 3.3.
     1/1 mismatches
     [1] 1.9 - 3.3 == -1.4
    
     -- 21. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#265) -
     history$cumulative$EpsilonFirst$cum_regret not equal to 3.5.
     1/1 mismatches
     [1] 2 - 3.5 == -1.5
    
     -- 22. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#266) -
     history$cumulative$Softmax$cum_regret not equal to 3.
     1/1 mismatches
     [1] 2.4 - 3 == -0.6
    
     -- 23. Failure: ContextualPrecachingBandit MAB policies (@test_bandits.R#267) -
     history$cumulative$BootstrapTS$cum_regret not equal to 2.7.
     1/1 mismatches
     [1] 2.6 - 2.7 == -0.1
    
     -- 24. Failure: ContextualHybridBandit (@test_bandits.R#331) ------------------
     history$cumulative$EpsilonGreedy$reward not equal to 0.6.
     1/1 mismatches
     [1] 0.7 - 0.6 == 0.1
    
     -- 25. Failure: ContextualHybridBandit (@test_bandits.R#351) ------------------
     history$cumulative$ContextualEpochGreedy$cum_reward not equal to 77.
     1/1 mismatches
     [1] 76 - 77 == -1
    
     -- 26. Failure: ContextualBernoulliBandit (@test_bandits.R#374) ---------------
     history$cumulative$LogitBTS$cum_reward not equal to 11.2.
     1/1 mismatches
     [1] 9.2 - 11.2 == -2
    
     -- 27. Failure: ContextualBernoulliBandit (@test_bandits.R#397) ---------------
     history$cumulative$EGreedy$cum_reward not equal to 8.1.
     1/1 mismatches
     [1] 6.4 - 8.1 == -1.7
    
     -- 28. Failure: ContextualBernoulliBandit (@test_bandits.R#398) ---------------
     history$cumulative$cEGreedy$cum_reward not equal to 8.5.
     1/1 mismatches
     [1] 6.9 - 8.5 == -1.6
    
     -- 29. Failure: ContextualBernoulliBandit (@test_bandits.R#400) ---------------
     history$cumulative$LinUCB$cum_reward not equal to 6.4.
     1/1 mismatches
     [1] 8.2 - 6.4 == 1.8
    
     -- 30. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#437) -------
     direct$get_cumulative_result(t = 20)$LinUCBDisjoint$cum_reward not equal to 8.
     1/1 mismatches
     [1] 4 - 8 == -4
    
     -- 31. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#438) -------
     direct$get_cumulative_result(t = 20)$EpsilonGreedy$cum_reward not equal to 4.
     1/1 mismatches
     [1] 3 - 4 == -1
    
     -- 32. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#439) -------
     direct$get_cumulative_result(t = 20)$Oracle$cum_reward not equal to 6.
     1/1 mismatches
     [1] 3 - 6 == -3
    
     -- 33. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#470) -------
     before$get_cumulative_result(t = 40)$Random$cum_reward not equal to 17.
     1/1 mismatches
     [1] 13 - 17 == -4
    
     -- 34. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#499) -------
     after$get_cumulative_result(t = 20)$LinUCBDisjoint$cum_reward not equal to 9.
     1/1 mismatches
     [1] 6 - 9 == -3
    
     -- 35. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#500) -------
     after$get_cumulative_result(t = 20)$EpsilonGreedy$cum_reward not equal to 10.
     1/1 mismatches
     [1] 8 - 10 == -2
    
     -- 36. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#501) -------
     after$get_cumulative_result(t = 20)$Oracle$cum_reward not equal to 5.
     1/1 mismatches
     [1] 7 - 5 == 2
    
     -- 37. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#503) -------
     after$get_cumulative_result(t = 21)$LinUCBDisjoint$cum_reward not equal to 10.
     1/1 mismatches
     [1] 6 - 10 == -4
    
     -- 38. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#504) -------
     after$get_cumulative_result(t = 21)$EpsilonGreedy$cum_reward not equal to 10.
     1/1 mismatches
     [1] 8 - 10 == -2
    
     -- 39. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#505) -------
     after$get_cumulative_result(t = 21)$Oracle$cum_reward not equal to 5.
     1/1 mismatches
     [1] 7 - 5 == 2
    
     -- 40. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#538) -------
     after$get_cumulative_result(t = 20)$LinUCBDisjoint$cum_reward not equal to 9.
     1/1 mismatches
     [1] 6 - 9 == -3
    
     -- 41. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#539) -------
     after$get_cumulative_result(t = 20)$EpsilonGreedy$cum_reward not equal to 10.
     1/1 mismatches
     [1] 8 - 10 == -2
    
     -- 42. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#540) -------
     after$get_cumulative_result(t = 20)$Oracle$cum_reward not equal to 5.
     1/1 mismatches
     [1] 7 - 5 == 2
    
     -- 43. Failure: PropensityWeightingBandit (@test_bandits.R#599) ---------------
     `a` not equal to 0.528.
     1/1 mismatches
     [1] 0.479 - 0.528 == -0.0485
    
     -- 44. Failure: PropensityWeightingBandit (@test_bandits.R#600) ---------------
     `b` not equal to 0.542.
     1/1 mismatches
     [1] 0.589 - 0.542 == 0.0472
    
     -- 45. Failure: PropensityWeightingBandit (@test_bandits.R#615) ---------------
     `d` not equal to 0.415.
     1/1 mismatches
     [1] 0.537 - 0.415 == 0.122
    
     -- 46. Failure: History summary and print (@test_history.R#18) ----------------
     capture.output(summary(history)) has changed from known value recorded in 'summary_history.rds'.
     13/37 mismatches
     x[9]: " Random 3 3 2.0000000 1.0000000 1.0000000"
     y[9]: " Random 3 3 0.6666667 0.3333333 0.5773503"
    
     x[10]: " Oracle 3 3 0.0000000 0.0000000 0.0000000"
     y[10]: " Oracle 3 3 0.6666667 1.3333333 1.1547005"
    
     x[11]: " ThompsonSampling 3 3 2.6666667 0.3333333 0.5773503"
     y[11]: " ThompsonSampling 3 3 1.3333333 2.3333333 1.5275252"
    
     x[12]: " Exp3 3 3 2.0000000 1.0000000 1.0000000"
     y[12]: " Exp3 3 3 1.0000000 1.0000000 1.0000000"
    
     x[14]: " UCB1 3 3 2.0000000 0.0000000 0.0000000"
     y[14]: " UCB1 3 3 1.0000000 1.0000000 1.0000000"
    
     -- 47. Failure: History save_csv without filename (@test_history.R#28) --------
     `csv_comparison_file` not equal to `import_context`.
     Component "choice": Mean relative difference: 0.7424242
     Component "reward": Mean relative difference: 4.5
     Component "optimal_arm": Mean relative difference: 0.6373626
     Component "optimal_reward": Mean absolute difference: 1
     Component "regret": Mean relative difference: 4
     Component "cum_reward": Mean relative difference: 8
     Component "cum_regret": Mean relative difference: 3.076923
     Component "X.1": Mean absolute difference: 1
     Component "X.3": Mean relative difference: 1
     ...
    
     -- 48. Failure: History save_csv without filename (@test_history.R#37) --------
     `csv_comparison_file` not equal to `import_file`.
     Component "choice": Mean relative difference: 0.7424242
     Component "reward": Mean relative difference: 4.5
     Component "optimal_arm": Mean relative difference: 0.6373626
     Component "optimal_reward": Mean absolute difference: 1
     Component "regret": Mean relative difference: 4
     Component "cum_reward": Mean relative difference: 8
     Component "cum_regret": Mean relative difference: 3.076923
     Component "X.1": Mean absolute difference: 1
     Component "X.3": Mean relative difference: 1
     ...
    
     -- 49. Failure: History save_csv with context (@test_history.R#46) ------------
     `csv_comparison_file` not equal to `import_file`.
     Component "choice": Mean relative difference: 0.7424242
     Component "reward": Mean relative difference: 4.5
     Component "optimal_arm": Mean relative difference: 0.6373626
     Component "optimal_reward": Mean absolute difference: 1
     Component "regret": Mean relative difference: 4
     Component "cum_reward": Mean relative difference: 8
     Component "cum_regret": Mean relative difference: 3.076923
     Component "X.1": Mean absolute difference: 1
     Component "X.3": Mean relative difference: 1
     ...
    
     -- 50. Failure: History save_csv inc theta removal without filename (@test_histo
     `csv_comparison_file` not equal to `import_file`.
     Component "choice": Mean relative difference: 0.7424242
     Component "reward": Mean relative difference: 4.5
     Component "optimal_arm": Mean relative difference: 0.6373626
     Component "optimal_reward": Mean absolute difference: 1
     Component "regret": Mean relative difference: 4
     Component "cum_reward": Mean relative difference: 8
     Component "cum_regret": Mean relative difference: 3.076923
     Component "X.1": Mean absolute difference: 1
     Component "X.3": Mean relative difference: 1
     ...
    
     -- 51. Failure: History save_csv nc theta removal theta with context (@test_hist
     `csv_comparison_file` not equal to `import_file`.
     Component "choice": Mean relative difference: 0.7424242
     Component "reward": Mean relative difference: 4.5
     Component "optimal_arm": Mean relative difference: 0.6373626
     Component "optimal_reward": Mean absolute difference: 1
     Component "regret": Mean relative difference: 4
     Component "cum_reward": Mean relative difference: 8
     Component "cum_regret": Mean relative difference: 3.076923
     Component "X.1": Mean absolute difference: 1
     Component "X.3": Mean relative difference: 1
     ...
    
     -- 52. Failure: Limit agents (@test_history.R#98) -----------------------------
     capture.output(summary(history, limit_agents = c("Exp3", "UCB1"))) has changed from known value recorded in 'summary_history_limit.rds'.
     6/25 mismatches
     x[9]: " Exp3 3 3 2 1 1"
     y[9]: " Exp3 3 3 1 1 1"
    
     x[10]: " UCB1 3 3 2 0 0"
     y[10]: " UCB1 3 3 1 1 1"
    
     x[16]: " Exp3 3 3 1 1 1"
     y[16]: " Exp3 3 3 0.6666667 1.3333333 1.1547005"
    
     x[17]: " UCB1 3 3 1 0 0"
     y[17]: " UCB1 3 3 0.6666667 0.3333333 0.5773503"
    
     x[23]: " Exp3 3 3 0.3333333 0.3333333 0.3333333"
     y[23]: " Exp3 3 3 0.2222222 0.4444444 0.3849002"
    
     -- 53. Failure: ContextualLogitBTSPolicy simulation (@test_policies.R#17) -----
     history$cumulative$ContextualLogitBTS$cum_reward not equal to 6.2.
     1/1 mismatches
     [1] 5.2 - 6.2 == -1
    
     -- 54. Failure: ContextualLogitBTSPolicy simulation (@test_policies.R#18) -----
     history$cumulative$ContextualLogitBTS$cum_regret not equal to 11.6.
     1/1 mismatches
     [1] 12.5 - 11.6 == 0.9
    
     == testthat results ===========================================================
     OK: 83 SKIPPED: 1 FAILED: 54
     1. Failure: Agent (@test_agent.R#28)
     2. Failure: ContextualLinearBandit, binary_rewards = FALSE (@test_bandits.R#114)
     3. Failure: ContextualLinearBandit, binary_rewards = TRUE (@test_bandits.R#133)
     4. Failure: ContextualWheelBandit (@test_bandits.R#162)
     5. Failure: ContextualWheelBandit (@test_bandits.R#163)
     6. Failure: BasicGaussianBandit (@test_bandits.R#178)
     7. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#204)
     8. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#205)
     9. Failure: BasicBernoulliBandit MAB policies (@test_bandits.R#207)
     1. ...
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
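
Editor's note: all 54 failures on this R-devel flavor are small numeric drifts in seeded simulations (e.g. `0.5 - 0.4 == 0.1`), not logic errors, and they appear only on the development version of R. This pattern is consistent with R-devel's change (released as R 3.6.0) to the default `sample()` algorithm, which shifts every RNG-driven trajectory even under a fixed seed. Two common remedies, sketched with exact option names from base R and testthat; the seed value and tolerance below are illustrative:

```r
# Remedy 1: pin the pre-3.6.0 sampling algorithm inside tests so seeded
# runs reproduce the historical reference values. The version guard is
# needed because older R does not accept the sample.kind argument, and
# suppressWarnings() silences the deprecation warning on newer R.
if (getRversion() >= "3.6.0") {
  suppressWarnings(RNGkind(sample.kind = "Rounding"))
}
set.seed(42)

# Remedy 2: compare stochastic results with a tolerance instead of exactly.
# testthat::expect_equal() accepts a tolerance argument, e.g.:
#   expect_equal(history$cumulative$UCB1$cum_regret, 3.4, tolerance = 0.5)
```

Remedy 1 keeps tests bit-identical across R versions; remedy 2 is more robust long-term, since it does not rely on a deprecated sampler.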

Version: 0.9.8.1
Check: package dependencies
Result: NOTE
    Packages suggested but not available for checking: ‘devtools’ ‘vdiffr’
Flavor: r-patched-solaris-x86
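
Editor's note: this NOTE only reports that two suggested packages were unavailable on the Solaris builder. CRAN policy requires that packages listed under Suggests be used conditionally, so the usual fix is to guard every use; a minimal sketch using testthat's `skip_if_not_installed()` and base R's `requireNamespace()`:

```r
# In tests: skip cleanly when a suggested package is missing.
testthat::skip_if_not_installed("vdiffr")

# In package code, examples, or vignettes: branch on availability.
if (requireNamespace("devtools", quietly = TRUE)) {
  # code that uses devtools goes here
}
```

With these guards in place, the checks pass (with at most this NOTE) even on platforms where the suggested packages cannot be installed.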

Version: 0.9.8.1
Check: tests
Result: ERROR
     Running ‘testthat.R’ [137s/34s]
    Running the tests in ‘tests/testthat.R’ failed.
    Last 13 lines of output:
     index <- index + 1L
     }
     }
     }
     sim_agent$bandit$final()
     local_history$data[t != 0]
     }
     3: e$fun(obj, substitute(ex), parent.frame(), e$data)
    
     ══ testthat results ═══════════════════════════════════════════════════════════
     OK: 134 SKIPPED: 1 FAILED: 1
     1. Error: PropensityWeightingBandit (@test_bandits.R#611)
    
     Error: testthat unit tests failed
     Execution halted
Flavor: r-oldrel-osx-x86_64
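
Editor's note: unlike the R-devel flavor, this flavor shows a single error (in the `PropensityWeightingBandit` test) with a truncated traceback. A sketch of how to reproduce it in isolation, assuming the package source is checked out locally with the standard testthat layout:

```r
# Run only the failing test file with full, untruncated output.
library(testthat)
library(contextual)
test_file("tests/testthat/test_bandits.R", reporter = "progress")
```

Running the single file locally (ideally on the same oldrel R version) surfaces the complete error message and traceback that the CRAN log cuts off at "Last 13 lines of output".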