CRAN Package Check Results for Package bayesImageS

Last updated on 2018-08-19 20:50:01 CEST.

Flavor                             Version  Tinstall  Tcheck  Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  0.5-2       54.18  300.34  354.52  OK
r-devel-linux-x86_64-debian-gcc    0.5-2       53.05  243.66  296.71  OK
r-devel-linux-x86_64-fedora-clang  0.5-2                      423.51  NOTE
r-devel-linux-x86_64-fedora-gcc    0.5-2                      438.04  OK
r-devel-windows-ix86+x86_64        0.5-2      139.00  341.00  480.00  NOTE
r-patched-linux-x86_64             0.5-2       63.53  304.16  367.69  OK
r-patched-solaris-x86              0.5-2                      184.30  WARN
r-release-linux-x86_64             0.5-2       63.44  307.61  371.05  OK
r-release-windows-ix86+x86_64      0.5-2      146.00  362.00  508.00  NOTE
r-release-osx-x86_64               0.5-2                              NOTE
r-oldrel-windows-ix86+x86_64       0.5-2      104.00  280.00  384.00  NOTE
r-oldrel-osx-x86_64                0.5-2                              NOTE
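
The same summary can be retrieved programmatically; a minimal sketch using the CRAN check-results helper in the tools package (available in recent versions of R; the exact columns returned may vary by version):

    res <- tools::CRAN_check_results()        # one row per package and check flavor
    res[res$Package == "bayesImageS", ]       # rows corresponding to the table above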

Check Details

Version: 0.5-2
Check: installed package size
Result: NOTE
     installed size is 7.4Mb
     sub-directories of 1Mb or more:
       data   2.4Mb
       doc    1.0Mb
       libs   3.7Mb
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-windows-ix86+x86_64, r-release-windows-ix86+x86_64, r-release-osx-x86_64, r-oldrel-windows-ix86+x86_64, r-oldrel-osx-x86_64
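
Most of the 7.4Mb comes from the bundled datasets (data, 2.4Mb) and the compiled code (libs, 3.7Mb). The libs size is largely fixed by the package's compiled sources, but the data directory can often be shrunk by re-saving the .rda files with stronger compression. A minimal sketch, assuming the datasets live under data/ in the package source tree:

    ## inspect how the bundled .rda files are currently stored
    tools::checkRdaFiles("data")
    ## re-save them with xz compression, which usually gives the smallest files
    tools::resaveRdaFiles("data", compress = "xz")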

Version: 0.5-2
Check: re-building of vignette outputs
Result: WARN
    Error in re-building vignettes:
     ...
    Warning in engine$weave(file, quiet = quiet, encoding = enc) :
     Pandoc (>= 1.12.3) and/or pandoc-citeproc not available. Falling back to R Markdown v1.
    Warning in engine$weave(file, quiet = quiet, encoding = enc) :
     Pandoc (>= 1.12.3) and/or pandoc-citeproc not available. Falling back to R Markdown v1.
    Loading required package: MASS
    Loading required package: mnormt
    Loading required package: gplots
    
    Attaching package: 'gplots'
    
    The following object is masked from 'package:stats':
    
     lowess
    
    Loading required package: combinat
    
    Attaching package: 'combinat'
    
    The following object is masked from 'package:utils':
    
     combn
    
    Creating a 'stanmodel' object PFAB
    Warning in system(cmd, intern = !verbose) :
     running command '/home/ripley/R/cc/bin/R CMD SHLIB file25f8247b76af.cpp 2> file25f8247b76af.cpp.err.txt' had status 1 and error message 'Illegal seek'
    "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 39: Error: The function "malloc" must have a prototype.
    "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 148: Error: The function "free" must have a prototype.
    "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 235: Error: The function "free" must have a prototype.
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/BH/include/boost/config/compiler/sunpro_cc.hpp", line 117: Warning (Anachronism): Attempt to redefine BOOST_NO_CXX11_RVALUE_REFERENCES without using #undef.
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/RcppEigen/include/Eigen/src/SparseCore/SparseCwiseUnaryOp.h", line 49: Error: Cannot define member of undefined specialization "unary_evaluator<CwiseUnaryOp<UnaryOp, ArgType>, Eigen::internal::IteratorBased>".
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/RcppEigen/include/Eigen/src/SparseCore/SparseCwiseUnaryOp.h", line 99: Error: Cannot define member of undefined specialization "unary_evaluator<CwiseUnaryView<ViewOp, ArgType>, Eigen::internal::IteratorBased>".
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/RcppEigen/include/Eigen/src/SparseCore/SparseBlock.h", line 503: Error: Cannot define member of undefined specialization "unary_evaluator<Block<ArgType, BlockRows, BlockCols, InnerPanel>, Eigen::internal::IteratorBased>".
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/RcppEigen/include/Eigen/src/SparseCore/SparseBlock.h", line 529: Error: Cannot define member of undefined specialization "unary_evaluator<Block<ArgType, BlockRows, BlockCols, InnerPanel>, Eigen::internal::IteratorBased>".
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/BH/include/boost/numeric/odeint/integrate/integrate_const.hpp", line 26: Warning: Last line in file "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/BH/include/boost/numeric/odeint/integrate/check_adapter.hpp" is not terminated with a newline.
    "/home/ripley/R/Lib32/rstan/include/rstan/value.hpp", line 26: Warning: rstan::value::operator() hides the virtual function stan::callbacks::writer::operator()(const std::vector<std::string>&).
    "/home/ripley/R/Lib32/rstan/include/rstan/value.hpp", line 26: Warning: rstan::value::operator() hides the virtual function stan::callbacks::writer::operator()(const std::string &).
    "/home/ripley/R/Lib32/rstan/include/rstan/value.hpp", line 26: Warning: rstan::value::operator() hides the virtual function stan::callbacks::writer::operator()().
    "/home/ripley/R/Lib32/rstan/include/rstan/values.hpp", line 50: Warning: rstan::values<Rcpp::Vector<14, PreserveStorage>>::operator() hides the virtual function stan::callbacks::writer::operator()(const std::vector<std::string>&).
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: While specializing "rstan::values<Rcpp::Vector<14, PreserveStorage>>".
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: Specialized in rstan::filtered_values<Rcpp::Vector<14, PreserveStorage>>.
    "/home/ripley/R/Lib32/rstan/include/rstan/rstan_writer.hpp", line 17: Where: Specialized in non-template code.
    "/home/ripley/R/Lib32/rstan/include/rstan/values.hpp", line 50: Warning: rstan::values<Rcpp::Vector<14, PreserveStorage>>::operator() hides the virtual function stan::callbacks::writer::operator()(const std::string &).
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: While specializing "rstan::values<Rcpp::Vector<14, PreserveStorage>>".
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: Specialized in rstan::filtered_values<Rcpp::Vector<14, PreserveStorage>>.
    "/home/ripley/R/Lib32/rstan/include/rstan/rstan_writer.hpp", line 17: Where: Specialized in non-template code.
    "/home/ripley/R/Lib32/rstan/include/rstan/values.hpp", line 50: Warning: rstan::values<Rcpp::Vector<14, PreserveStorage>>::operator() hides the virtual function stan::callbacks::writer::operator()().
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: While specializing "rstan::values<Rcpp::Vector<14, PreserveStorage>>".
    "/home/ripley/R/Lib32/rstan/include/rstan/filtered_values.hpp", line 17: Where: Specialized in rstan::filtered_values<Rcpp::Vector<14, PreserveStorage>>.
    "/home/ripley/R/Lib32/rstan/include/rstan/rstan_writer.hpp", line 17: Where: Specialized in non-template code.
    7 Error(s) and 8 Warning(s) detected.
    make: Fatal error: Command failed for target `file25f8247b76af.o'
    Current working directory /tmp/Rtmp58aa_s
    
    ERROR(s) during compilation: source code errors or compiler configuration errors!
    
    Program source:
     1:
     2: // includes from the plugin
     3:
     4:
     5: // user includes
     6: #define STAN__SERVICES__COMMAND_HPP// Code generated by Stan version 2.17.0
     7:
     8: #include <stan/model/model_header.hpp>
     9:
     10: namespace model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace {
     11:
     12: using std::istream;
     13: using std::string;
     14: using std::stringstream;
     15: using std::vector;
     16: using stan::io::dump;
     17: using stan::math::lgamma;
     18: using stan::model::prob_grad;
     19: using namespace stan::math;
     20:
     21: typedef Eigen::Matrix<double,Eigen::Dynamic,1> vector_d;
     22: typedef Eigen::Matrix<double,1,Eigen::Dynamic> row_vector_d;
     23: typedef Eigen::Matrix<double,Eigen::Dynamic,Eigen::Dynamic> matrix_d;
     24:
     25: static int current_statement_begin__;
     26:
     27: stan::io::program_reader prog_reader__() {
     28: stan::io::program_reader reader;
     29: reader.add_event(0, 0, "start", "model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e");
     30: reader.add_event(57, 57, "end", "model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e");
     31: return reader;
     32: }
     33:
     34: template <typename T0__, typename T1__, typename T2__, typename T3__, typename T4__, typename T5__, typename T6__, typename T7__, typename T8__>
     35: Eigen::Matrix<typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__, T7__, typename boost::math::tools::promote_args<T8__>::type>::type>::type, Eigen::Dynamic,1>
     36: ft(const Eigen::Matrix<T0__, Eigen::Dynamic,1>& t,
     37: const T1__& tC,
     38: const T2__& e0,
     39: const T3__& ecrit,
     40: const T4__& v0,
     41: const T5__& vmaxLo,
     42: const T6__& vmaxHi,
     43: const T7__& phi1,
     44: const T8__& phi2, std::ostream* pstream__) {
     45: typedef typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__, T7__, typename boost::math::tools::promote_args<T8__>::type>::type>::type fun_scalar_t__;
     46: typedef fun_scalar_t__ fun_return_scalar_t__;
     47: const static bool propto__ = true;
     48: (void) propto__;
     49: fun_scalar_t__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
     50: (void) DUMMY_VAR__; // suppress unused var warning
     51:
     52: int current_statement_begin__ = -1;
     53: try {
     54: {
     55: current_statement_begin__ = 3;
     56: validate_non_negative_index("mu", "num_elements(t)", num_elements(t));
     57: Eigen::Matrix<fun_scalar_t__,Eigen::Dynamic,1> mu(static_cast<Eigen::VectorXd::Index>(num_elements(t)));
     58: (void) mu; // dummy to suppress unused var warning
     59:
     60: stan::math::initialize(mu, std::numeric_limits<double>::quiet_NaN());
     61: stan::math::fill(mu,DUMMY_VAR__);
     62: current_statement_begin__ = 4;
     63: fun_scalar_t__ sqrtBcritPhi;
     64: (void) sqrtBcritPhi; // dummy to suppress unused var warning
     65:
     66: stan::math::initialize(sqrtBcritPhi, std::numeric_limits<double>::quiet_NaN());
     67: stan::math::fill(sqrtBcritPhi,DUMMY_VAR__);
     68: stan::math::assign(sqrtBcritPhi,(sqrt(tC) * phi1));
     69:
     70:
     71: current_statement_begin__ = 5;
     72: for (int i = 1; i <= num_elements(t); ++i) {
     73:
     74: current_statement_begin__ = 6;
     75: if (as_bool(logical_lte(get_base1(t,i,"t",1),tC))) {
     76: {
     77: current_statement_begin__ = 7;
     78: fun_scalar_t__ sqrtBdiffPhi;
     79: (void) sqrtBdiffPhi; // dummy to suppress unused var warning
     80:
     81: stan::math::initialize(sqrtBdiffPhi, std::numeric_limits<double>::quiet_NaN());
     82: stan::math::fill(sqrtBdiffPhi,DUMMY_VAR__);
     83: stan::math::assign(sqrtBdiffPhi,(sqrt((tC - get_base1(t,i,"t",1))) * phi1));
     84:
     85:
     86: current_statement_begin__ = 8;
     87: stan::math::assign(get_base1_lhs(mu,i,"mu",1), ((e0 + (get_base1(t,i,"t",1) * v0)) - (((2 * (vmaxLo - v0)) / pow(phi1,2)) * (((sqrtBcritPhi + 1) / exp(sqrtBcritPhi)) - ((sqrtBdiffPhi + 1) / exp(sqrtBdiffPhi))))));
     88: }
     89: } else {
     90: {
     91: current_statement_begin__ = 10;
     92: fun_scalar_t__ sqrtBdiff;
     93: (void) sqrtBdiff; // dummy to suppress unused var warning
     94:
     95: stan::math::initialize(sqrtBdiff, std::numeric_limits<double>::quiet_NaN());
     96: stan::math::fill(sqrtBdiff,DUMMY_VAR__);
     97: stan::math::assign(sqrtBdiff,sqrt((get_base1(t,i,"t",1) - tC)));
     98:
     99:
    100: current_statement_begin__ = 11;
    101: stan::math::assign(get_base1_lhs(mu,i,"mu",1), (ecrit - (((2 * vmaxHi) / phi2) * ((sqrtBdiff / exp((phi2 * sqrtBdiff))) + ((exp((-(phi2) * sqrtBdiff)) - 1) / phi2)))));
    102: }
    103: }
    104: }
    105: current_statement_begin__ = 14;
    106: return stan::math::promote_scalar<fun_return_scalar_t__>(mu);
    107: }
    108: } catch (const std::exception& e) {
    109: stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
    110: // Next line prevents compiler griping about no return
    111: throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    112: }
    113: }
    114:
    115:
    116: struct ft_functor__ {
    117: template <typename T0__, typename T1__, typename T2__, typename T3__, typename T4__, typename T5__, typename T6__, typename T7__, typename T8__>
    118: Eigen::Matrix<typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__, T7__, typename boost::math::tools::promote_args<T8__>::type>::type>::type, Eigen::Dynamic,1>
    119: operator()(const Eigen::Matrix<T0__, Eigen::Dynamic,1>& t,
    120: const T1__& tC,
    121: const T2__& e0,
    122: const T3__& ecrit,
    123: const T4__& v0,
    124: const T5__& vmaxLo,
    125: const T6__& vmaxHi,
    126: const T7__& phi1,
    127: const T8__& phi2, std::ostream* pstream__) const {
    128: return ft(t, tC, e0, ecrit, v0, vmaxLo, vmaxHi, phi1, phi2, pstream__);
    129: }
    130: };
    131:
    132: template <typename T0__, typename T1__, typename T2__, typename T3__, typename T4__, typename T5__, typename T6__>
    133: Eigen::Matrix<typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__>::type>::type, Eigen::Dynamic,1>
    134: dfdt(const Eigen::Matrix<T0__, Eigen::Dynamic,1>& t,
    135: const T1__& tC,
    136: const T2__& v0,
    137: const T3__& vmaxLo,
    138: const T4__& vmaxHi,
    139: const T5__& r1,
    140: const T6__& r2, std::ostream* pstream__) {
    141: typedef typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__>::type>::type fun_scalar_t__;
    142: typedef fun_scalar_t__ fun_return_scalar_t__;
    143: const static bool propto__ = true;
    144: (void) propto__;
    145: fun_scalar_t__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
    146: (void) DUMMY_VAR__; // suppress unused var warning
    147:
    148: int current_statement_begin__ = -1;
    149: try {
    150: {
    151: current_statement_begin__ = 18;
    152: validate_non_negative_index("dmu", "num_elements(t)", num_elements(t));
    153: Eigen::Matrix<fun_scalar_t__,Eigen::Dynamic,1> dmu(static_cast<Eigen::VectorXd::Index>(num_elements(t)));
    154: (void) dmu; // dummy to suppress unused var warning
    155:
    156: stan::math::initialize(dmu, std::numeric_limits<double>::quiet_NaN());
    157: stan::math::fill(dmu,DUMMY_VAR__);
    158:
    159:
    160: current_statement_begin__ = 19;
    161: for (int i = 1; i <= num_elements(t); ++i) {
    162:
    163: current_statement_begin__ = 20;
    164: if (as_bool(logical_lte(get_base1(t,i,"t",1),tC))) {
    165:
    166: current_statement_begin__ = 21;
    167: stan::math::assign(get_base1_lhs(dmu,i,"dmu",1), (v0 + ((vmaxLo - v0) * exp((-(r1) * sqrt((tC - get_base1(t,i,"t",1))))))));
    168: } else {
    169:
    170: current_statement_begin__ = 23;
    171: stan::math::assign(get_base1_lhs(dmu,i,"dmu",1), (vmaxHi * exp((-(r2) * sqrt((get_base1(t,i,"t",1) - tC))))));
    172: }
    173: }
    174: current_statement_begin__ = 26;
    175: return stan::math::promote_scalar<fun_return_scalar_t__>(dmu);
    176: }
    177: } catch (const std::exception& e) {
    178: stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
    179: // Next line prevents compiler griping about no return
    180: throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    181: }
    182: }
    183:
    184:
    185: struct dfdt_functor__ {
    186: template <typename T0__, typename T1__, typename T2__, typename T3__, typename T4__, typename T5__, typename T6__>
    187: Eigen::Matrix<typename boost::math::tools::promote_args<T0__, T1__, T2__, T3__, typename boost::math::tools::promote_args<T4__, T5__, T6__>::type>::type, Eigen::Dynamic,1>
    188: operator()(const Eigen::Matrix<T0__, Eigen::Dynamic,1>& t,
    189: const T1__& tC,
    190: const T2__& v0,
    191: const T3__& vmaxLo,
    192: const T4__& vmaxHi,
    193: const T5__& r1,
    194: const T6__& r2, std::ostream* pstream__) const {
    195: return dfdt(t, tC, v0, vmaxLo, vmaxHi, r1, r2, pstream__);
    196: }
    197: };
    198:
    199: class model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e : public prob_grad {
    200: private:
    201: int M;
    202: int N;
    203: double maxY;
    204: double Vlim;
    205: double e0;
    206: double v0;
    207: double tcrit;
    208: matrix_d y;
    209: vector_d t;
    210: public:
    211: model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e(stan::io::var_context& context__,
    212: std::ostream* pstream__ = 0)
    213: : prob_grad(0) {
    214: ctor_body(context__, 0, pstream__);
    215: }
    216:
    217: model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e(stan::io::var_context& context__,
    218: unsigned int random_seed__,
    219: std::ostream* pstream__ = 0)
    220: : prob_grad(0) {
    221: ctor_body(context__, random_seed__, pstream__);
    222: }
    223:
    224: void ctor_body(stan::io::var_context& context__,
    225: unsigned int random_seed__,
    226: std::ostream* pstream__) {
    227: boost::ecuyer1988 base_rng__ =
    228: stan::services::util::create_rng(random_seed__, 0);
    229: (void) base_rng__; // suppress unused var warning
    230:
    231: current_statement_begin__ = -1;
    232:
    233: static const char* function__ = "model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e";
    234: (void) function__; // dummy to suppress unused var warning
    235: size_t pos__;
    236: (void) pos__; // dummy to suppress unused var warning
    237: std::vector<int> vals_i__;
    238: std::vector<double> vals_r__;
    239: double DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
    240: (void) DUMMY_VAR__; // suppress unused var warning
    241:
    242: // initialize member variables
    243: try {
    244: current_statement_begin__ = 30;
    245: context__.validate_dims("data initialization", "M", "int", context__.to_vec());
    246: M = int(0);
    247: vals_i__ = context__.vals_i("M");
    248: pos__ = 0;
    249: M = vals_i__[pos__++];
    250: current_statement_begin__ = 31;
    251: context__.validate_dims("data initialization", "N", "int", context__.to_vec());
    252: N = int(0);
    253: vals_i__ = context__.vals_i("N");
    254: pos__ = 0;
    255: N = vals_i__[pos__++];
    256: current_statement_begin__ = 32;
    257: context__.validate_dims("data initialization", "maxY", "double", context__.to_vec());
    258: maxY = double(0);
    259: vals_r__ = context__.vals_r("maxY");
    260: pos__ = 0;
    261: maxY = vals_r__[pos__++];
    262: current_statement_begin__ = 33;
    263: context__.validate_dims("data initialization", "Vlim", "double", context__.to_vec());
    264: Vlim = double(0);
    265: vals_r__ = context__.vals_r("Vlim");
    266: pos__ = 0;
    267: Vlim = vals_r__[pos__++];
    268: current_statement_begin__ = 34;
    269: context__.validate_dims("data initialization", "e0", "double", context__.to_vec());
    270: e0 = double(0);
    271: vals_r__ = context__.vals_r("e0");
    272: pos__ = 0;
    273: e0 = vals_r__[pos__++];
    274: current_statement_begin__ = 35;
    275: context__.validate_dims("data initialization", "v0", "double", context__.to_vec());
    276: v0 = double(0);
    277: vals_r__ = context__.vals_r("v0");
    278: pos__ = 0;
    279: v0 = vals_r__[pos__++];
    280: current_statement_begin__ = 36;
    281: context__.validate_dims("data initialization", "tcrit", "double", context__.to_vec());
    282: tcrit = double(0);
    283: vals_r__ = context__.vals_r("tcrit");
    284: pos__ = 0;
    285: tcrit = vals_r__[pos__++];
    286: current_statement_begin__ = 37;
    287: validate_non_negative_index("y", "M", M);
    288: validate_non_negative_index("y", "N", N);
    289: context__.validate_dims("data initialization", "y", "matrix_d", context__.to_vec(M,N));
    290: validate_non_negative_index("y", "M", M);
    291: validate_non_negative_index("y", "N", N);
    292: y = matrix_d(static_cast<Eigen::VectorXd::Index>(M),static_cast<Eigen::VectorXd::Index>(N));
    293: vals_r__ = context__.vals_r("y");
    294: pos__ = 0;
    295: size_t y_m_mat_lim__ = M;
    296: size_t y_n_mat_lim__ = N;
    297: for (size_t n_mat__ = 0; n_mat__ < y_n_mat_lim__; ++n_mat__) {
    298: for (size_t m_mat__ = 0; m_mat__ < y_m_mat_lim__; ++m_mat__) {
    299: y(m_mat__,n_mat__) = vals_r__[pos__++];
    300: }
    301: }
    302: current_statement_begin__ = 38;
    303: validate_non_negative_index("t", "M", M);
    304: context__.validate_dims("data initialization", "t", "vector_d", context__.to_vec(M));
    305: validate_non_negative_index("t", "M", M);
    306: t = vector_d(static_cast<Eigen::VectorXd::Index>(M));
    307: vals_r__ = context__.vals_r("t");
    308: pos__ = 0;
    309: size_t t_i_vec_lim__ = M;
    310: for (size_t i_vec__ = 0; i_vec__ < t_i_vec_lim__; ++i_vec__) {
    311: t[i_vec__] = vals_r__[pos__++];
    312: }
    313:
    314: // validate, data variables
    315: current_statement_begin__ = 30;
    316: check_greater_or_equal(function__,"M",M,1);
    317: current_statement_begin__ = 31;
    318: check_greater_or_equal(function__,"N",N,1);
    319: current_statement_begin__ = 32;
    320: check_greater_or_equal(function__,"maxY",maxY,1);
    321: current_statement_begin__ = 33;
    322: check_greater_or_equal(function__,"Vlim",Vlim,1);
    323: current_statement_begin__ = 34;
    324: check_greater_or_equal(function__,"e0",e0,0);
    325: current_statement_begin__ = 35;
    326: check_greater_or_equal(function__,"v0",v0,0);
    327: current_statement_begin__ = 36;
    328: current_statement_begin__ = 37;
    329: check_greater_or_equal(function__,"y",y,0);
    330: check_less_or_equal(function__,"y",y,maxY);
    331: current_statement_begin__ = 38;
    332: // initialize data variables
    333:
    334:
    335: // validate transformed data
    336:
    337: // validate, set parameter ranges
    338: num_params_r__ = 0U;
    339: param_ranges_i__.clear();
    340: current_statement_begin__ = 41;
    341: ++num_params_r__;
    342: current_statement_begin__ = 42;
    343: ++num_params_r__;
    344: current_statement_begin__ = 43;
    345: ++num_params_r__;
    346: current_statement_begin__ = 44;
    347: ++num_params_r__;
    348: current_statement_begin__ = 45;
    349: ++num_params_r__;
    350: } catch (const std::exception& e) {
    351: stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
    352: // Next line prevents compiler griping about no return
    353: throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    354: }
    355: }
    356:
    357: ~model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e() { }
    358:
    359:
    360: void transform_inits(const stan::io::var_context& context__,
    361: std::vector<int>& params_i__,
    362: std::vector<double>& params_r__,
    363: std::ostream* pstream__) const {
    364: stan::io::writer<double> writer__(params_r__,params_i__);
    365: size_t pos__;
    366: (void) pos__; // dummy call to supress warning
    367: std::vector<double> vals_r__;
    368: std::vector<int> vals_i__;
    369:
    370: if (!(context__.contains_r("a")))
    371: throw std::runtime_error("variable a missing");
    372: vals_r__ = context__.vals_r("a");
    373: pos__ = 0U;
    374: context__.validate_dims("initialization", "a", "double", context__.to_vec());
    375: double a(0);
    376: a = vals_r__[pos__++];
    377: try {
    378: writer__.scalar_lb_unconstrain(0,a);
    379: } catch (const std::exception& e) {
    380: throw std::runtime_error(std::string("Error transforming variable a: ") + e.what());
    381: }
    382:
    383: if (!(context__.contains_r("b")))
    384: throw std::runtime_error("variable b missing");
    385: vals_r__ = context__.vals_r("b");
    386: pos__ = 0U;
    387: context__.validate_dims("initialization", "b", "double", context__.to_vec());
    388: double b(0);
    389: b = vals_r__[pos__++];
    390: try {
    391: writer__.scalar_lb_unconstrain(0,b);
    392: } catch (const std::exception& e) {
    393: throw std::runtime_error(std::string("Error transforming variable b: ") + e.what());
    394: }
    395:
    396: if (!(context__.contains_r("ecrit")))
    397: throw std::runtime_error("variable ecrit missing");
    398: vals_r__ = context__.vals_r("ecrit");
    399: pos__ = 0U;
    400: context__.validate_dims("initialization", "ecrit", "double", context__.to_vec());
    401: double ecrit(0);
    402: ecrit = vals_r__[pos__++];
    403: try {
    404: writer__.scalar_lub_unconstrain(e0,maxY,ecrit);
    405: } catch (const std::exception& e) {
    406: throw std::runtime_error(std::string("Error transforming variable ecrit: ") + e.what());
    407: }
    408:
    409: if (!(context__.contains_r("vmaxLo")))
    410: throw std::runtime_error("variable vmaxLo missing");
    411: vals_r__ = context__.vals_r("vmaxLo");
    412: pos__ = 0U;
    413: context__.validate_dims("initialization", "vmaxLo", "double", context__.to_vec());
    414: double vmaxLo(0);
    415: vmaxLo = vals_r__[pos__++];
    416: try {
    417: writer__.scalar_lub_unconstrain(0,Vlim,vmaxLo);
    418: } catch (const std::exception& e) {
    419: throw std::runtime_error(std::string("Error transforming variable vmaxLo: ") + e.what());
    420: }
    421:
    422: if (!(context__.contains_r("vmaxHi")))
    423: throw std::runtime_error("variable vmaxHi missing");
    424: vals_r__ = context__.vals_r("vmaxHi");
    425: pos__ = 0U;
    426: context__.validate_dims("initialization", "vmaxHi", "double", context__.to_vec());
    427: double vmaxHi(0);
    428: vmaxHi = vals_r__[pos__++];
    429: try {
    430: writer__.scalar_lub_unconstrain(0,Vlim,vmaxHi);
    431: } catch (const std::exception& e) {
    432: throw std::runtime_error(std::string("Error transforming variable vmaxHi: ") + e.what());
    433: }
    434:
    435: params_r__ = writer__.data_r();
    436: params_i__ = writer__.data_i();
    437: }
    438:
    439: void transform_inits(const stan::io::var_context& context,
    440: Eigen::Matrix<double,Eigen::Dynamic,1>& params_r,
    441: std::ostream* pstream__) const {
    442: std::vector<double> params_r_vec;
    443: std::vector<int> params_i_vec;
    444: transform_inits(context, params_i_vec, params_r_vec, pstream__);
    445: params_r.resize(params_r_vec.size());
    446: for (int i = 0; i < params_r.size(); ++i)
    447: params_r(i) = params_r_vec[i];
    448: }
    449:
    450:
    451: template <bool propto__, bool jacobian__, typename T__>
    452: T__ log_prob(vector<T__>& params_r__,
    453: vector<int>& params_i__,
    454: std::ostream* pstream__ = 0) const {
    455:
    456: T__ DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
    457: (void) DUMMY_VAR__; // suppress unused var warning
    458:
    459: T__ lp__(0.0);
    460: stan::math::accumulator<T__> lp_accum__;
    461:
    462: try {
    463: // model parameters
    464: stan::io::reader<T__> in__(params_r__,params_i__);
    465:
    466: T__ a;
    467: (void) a; // dummy to suppress unused var warning
    468: if (jacobian__)
    469: a = in__.scalar_lb_constrain(0,lp__);
    470: else
    471: a = in__.scalar_lb_constrain(0);
    472:
    473: T__ b;
    474: (void) b; // dummy to suppress unused var warning
    475: if (jacobian__)
    476: b = in__.scalar_lb_constrain(0,lp__);
    477: else
    478: b = in__.scalar_lb_constrain(0);
    479:
    480: T__ ecrit;
    481: (void) ecrit; // dummy to suppress unused var warning
    482: if (jacobian__)
    483: ecrit = in__.scalar_lub_constrain(e0,maxY,lp__);
    484: else
    485: ecrit = in__.scalar_lub_constrain(e0,maxY);
    486:
    487: T__ vmaxLo;
    488: (void) vmaxLo; // dummy to suppress unused var warning
    489: if (jacobian__)
    490: vmaxLo = in__.scalar_lub_constrain(0,Vlim,lp__);
    491: else
    492: vmaxLo = in__.scalar_lub_constrain(0,Vlim);
    493:
    494: T__ vmaxHi;
    495: (void) vmaxHi; // dummy to suppress unused var warning
    496: if (jacobian__)
    497: vmaxHi = in__.scalar_lub_constrain(0,Vlim,lp__);
    498: else
    499: vmaxHi = in__.scalar_lub_constrain(0,Vlim);
    500:
    501:
    502: // transformed parameters
    503: current_statement_begin__ = 48;
    504: validate_non_negative_index("curr_mu", "M", M);
    505: Eigen::Matrix<T__,Eigen::Dynamic,1> curr_mu(static_cast<Eigen::VectorXd::Index>(M));
    506: (void) curr_mu; // dummy to suppress unused var warning
    507:
    508: stan::math::initialize(curr_mu, DUMMY_VAR__);
    509: stan::math::fill(curr_mu,DUMMY_VAR__);
    510: current_statement_begin__ = 49;
    511: validate_non_negative_index("curr_var", "M", M);
    512: Eigen::Matrix<T__,Eigen::Dynamic,1> curr_var(static_cast<Eigen::VectorXd::Index>(M));
    513: (void) curr_var; // dummy to suppress unused var warning
    514:
    515: stan::math::initialize(curr_var, DUMMY_VAR__);
    516: stan::math::fill(curr_var,DUMMY_VAR__);
    517:
    518:
    519: current_statement_begin__ = 50;
    520: stan::math::assign(curr_mu, ft(t,tcrit,e0,ecrit,v0,vmaxLo,vmaxHi,a,b, pstream__));
    521: current_statement_begin__ = 51;
    522: stan::math::assign(curr_var, dfdt(t,tcrit,v0,vmaxLo,vmaxHi,a,b, pstream__));
    523:
    524: // validate transformed parameters
    525: for (int i0__ = 0; i0__ < M; ++i0__) {
    526: if (stan::math::is_uninitialized(curr_mu(i0__))) {
    527: std::stringstream msg__;
    528: msg__ << "Undefined transformed parameter: curr_mu" << '[' << i0__ << ']';
    529: throw std::runtime_error(msg__.str());
    530: }
    531: }
    532: for (int i0__ = 0; i0__ < M; ++i0__) {
    533: if (stan::math::is_uninitialized(curr_var(i0__))) {
    534: std::stringstream msg__;
    535: msg__ << "Undefined transformed parameter: curr_var" << '[' << i0__ << ']';
    536: throw std::runtime_error(msg__.str());
    537: }
    538: }
    539:
    540: const char* function__ = "validate transformed params";
    541: (void) function__; // dummy to suppress unused var warning
    542: current_statement_begin__ = 48;
    543: current_statement_begin__ = 49;
    544:
    545: // model body
    546:
    547: current_statement_begin__ = 54;
    548: for (int i = 1; i <= M; ++i) {
    549:
    550: current_statement_begin__ = 55;
    551: lp_accum__.add(normal_log<propto__>(stan::model::rvalue(y, stan::model::cons_list(stan::model::index_uni(i), stan::model::cons_list(stan::model::index_omni(), stan::model::nil_index_list())), "y"), get_base1(curr_mu,i,"curr_mu",1), sqrt(get_base1(curr_var,i,"curr_var",1))));
    552: }
    553:
    554: } catch (const std::exception& e) {
    555: stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
    556: // Next line prevents compiler griping about no return
    557: throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    558: }
    559:
    560: lp_accum__.add(lp__);
    561: return lp_accum__.sum();
    562:
    563: } // log_prob()
    564:
    565: template <bool propto, bool jacobian, typename T_>
    566: T_ log_prob(Eigen::Matrix<T_,Eigen::Dynamic,1>& params_r,
    567: std::ostream* pstream = 0) const {
    568: std::vector<T_> vec_params_r;
    569: vec_params_r.reserve(params_r.size());
    570: for (int i = 0; i < params_r.size(); ++i)
    571: vec_params_r.push_back(params_r(i));
    572: std::vector<int> vec_params_i;
    573: return log_prob<propto,jacobian,T_>(vec_params_r, vec_params_i, pstream);
    574: }
    575:
    576:
    577: void get_param_names(std::vector<std::string>& names__) const {
    578: names__.resize(0);
    579: names__.push_back("a");
    580: names__.push_back("b");
    581: names__.push_back("ecrit");
    582: names__.push_back("vmaxLo");
    583: names__.push_back("vmaxHi");
    584: names__.push_back("curr_mu");
    585: names__.push_back("curr_var");
    586: }
    587:
    588:
    589: void get_dims(std::vector<std::vector<size_t> >& dimss__) const {
    590: dimss__.resize(0);
    591: std::vector<size_t> dims__;
    592: dims__.resize(0);
    593: dimss__.push_back(dims__);
    594: dims__.resize(0);
    595: dimss__.push_back(dims__);
    596: dims__.resize(0);
    597: dimss__.push_back(dims__);
    598: dims__.resize(0);
    599: dimss__.push_back(dims__);
    600: dims__.resize(0);
    601: dimss__.push_back(dims__);
    602: dims__.resize(0);
    603: dims__.push_back(M);
    604: dimss__.push_back(dims__);
    605: dims__.resize(0);
    606: dims__.push_back(M);
    607: dimss__.push_back(dims__);
    608: }
    609:
    610: template <typename RNG>
    611: void write_array(RNG& base_rng__,
    612: std::vector<double>& params_r__,
    613: std::vector<int>& params_i__,
    614: std::vector<double>& vars__,
    615: bool include_tparams__ = true,
    616: bool include_gqs__ = true,
    617: std::ostream* pstream__ = 0) const {
    618: vars__.resize(0);
    619: stan::io::reader<double> in__(params_r__,params_i__);
    620: static const char* function__ = "model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::write_array";
    621: (void) function__; // dummy to suppress unused var warning
    622: // read-transform, write parameters
    623: double a = in__.scalar_lb_constrain(0);
    624: double b = in__.scalar_lb_constrain(0);
    625: double ecrit = in__.scalar_lub_constrain(e0,maxY);
    626: double vmaxLo = in__.scalar_lub_constrain(0,Vlim);
    627: double vmaxHi = in__.scalar_lub_constrain(0,Vlim);
    628: vars__.push_back(a);
    629: vars__.push_back(b);
    630: vars__.push_back(ecrit);
    631: vars__.push_back(vmaxLo);
    632: vars__.push_back(vmaxHi);
    633:
    634: if (!include_tparams__) return;
    635: // declare and define transformed parameters
    636: double lp__ = 0.0;
    637: (void) lp__; // dummy to suppress unused var warning
    638: stan::math::accumulator<double> lp_accum__;
    639:
    640: double DUMMY_VAR__(std::numeric_limits<double>::quiet_NaN());
    641: (void) DUMMY_VAR__; // suppress unused var warning
    642:
    643: try {
    644: current_statement_begin__ = 48;
    645: validate_non_negative_index("curr_mu", "M", M);
    646: vector_d curr_mu(static_cast<Eigen::VectorXd::Index>(M));
    647: (void) curr_mu; // dummy to suppress unused var warning
    648:
    649: stan::math::initialize(curr_mu, std::numeric_limits<double>::quiet_NaN());
    650: stan::math::fill(curr_mu,DUMMY_VAR__);
    651: current_statement_begin__ = 49;
    652: validate_non_negative_index("curr_var", "M", M);
    653: vector_d curr_var(static_cast<Eigen::VectorXd::Index>(M));
    654: (void) curr_var; // dummy to suppress unused var warning
    655:
    656: stan::math::initialize(curr_var, std::numeric_limits<double>::quiet_NaN());
    657: stan::math::fill(curr_var,DUMMY_VAR__);
    658:
    659:
    660: current_statement_begin__ = 50;
    661: stan::math::assign(curr_mu, ft(t,tcrit,e0,ecrit,v0,vmaxLo,vmaxHi,a,b, pstream__));
    662: current_statement_begin__ = 51;
    663: stan::math::assign(curr_var, dfdt(t,tcrit,v0,vmaxLo,vmaxHi,a,b, pstream__));
    664:
    665: // validate transformed parameters
    666: current_statement_begin__ = 48;
    667: current_statement_begin__ = 49;
    668:
    669: // write transformed parameters
    670: for (int k_0__ = 0; k_0__ < M; ++k_0__) {
    671: vars__.push_back(curr_mu[k_0__]);
    672: }
    673: for (int k_0__ = 0; k_0__ < M; ++k_0__) {
    674: vars__.push_back(curr_var[k_0__]);
    675: }
    676:
    677: if (!include_gqs__) return;
    678: // declare and define generated quantities
    679:
    680:
    681:
    682: // validate generated quantities
    683:
    684: // write generated quantities
    685: } catch (const std::exception& e) {
    686: stan::lang::rethrow_located(e, current_statement_begin__, prog_reader__());
    687: // Next line prevents compiler griping about no return
    688: throw std::runtime_error("*** IF YOU SEE THIS, PLEASE REPORT A BUG ***");
    689: }
    690: }
    691:
    692: template <typename RNG>
    693: void write_array(RNG& base_rng,
    694: Eigen::Matrix<double,Eigen::Dynamic,1>& params_r,
    695: Eigen::Matrix<double,Eigen::Dynamic,1>& vars,
    696: bool include_tparams = true,
    697: bool include_gqs = true,
    698: std::ostream* pstream = 0) const {
    699: std::vector<double> params_r_vec(params_r.size());
    700: for (int i = 0; i < params_r.size(); ++i)
    701: params_r_vec[i] = params_r(i);
    702: std::vector<double> vars_vec;
    703: std::vector<int> params_i_vec;
    704: write_array(base_rng,params_r_vec,params_i_vec,vars_vec,include_tparams,include_gqs,pstream);
    705: vars.resize(vars_vec.size());
    706: for (int i = 0; i < vars.size(); ++i)
    707: vars(i) = vars_vec[i];
    708: }
    709:
    710: static std::string model_name() {
    711: return "model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e";
    712: }
    713:
    714:
    715: void constrained_param_names(std::vector<std::string>& param_names__,
    716: bool include_tparams__ = true,
    717: bool include_gqs__ = true) const {
    718: std::stringstream param_name_stream__;
    719: param_name_stream__.str(std::string());
    720: param_name_stream__ << "a";
    721: param_names__.push_back(param_name_stream__.str());
    722: param_name_stream__.str(std::string());
    723: param_name_stream__ << "b";
    724: param_names__.push_back(param_name_stream__.str());
    725: param_name_stream__.str(std::string());
    726: param_name_stream__ << "ecrit";
    727: param_names__.push_back(param_name_stream__.str());
    728: param_name_stream__.str(std::string());
    729: param_name_stream__ << "vmaxLo";
    730: param_names__.push_back(param_name_stream__.str());
    731: param_name_stream__.str(std::string());
    732: param_name_stream__ << "vmaxHi";
    733: param_names__.push_back(param_name_stream__.str());
    734:
    735: if (!include_gqs__ && !include_tparams__) return;
    736: for (int k_0__ = 1; k_0__ <= M; ++k_0__) {
    737: param_name_stream__.str(std::string());
    738: param_name_stream__ << "curr_mu" << '.' << k_0__;
    739: param_names__.push_back(param_name_stream__.str());
    740: }
    741: for (int k_0__ = 1; k_0__ <= M; ++k_0__) {
    742: param_name_stream__.str(std::string());
    743: param_name_stream__ << "curr_var" << '.' << k_0__;
    744: param_names__.push_back(param_name_stream__.str());
    745: }
    746:
    747: if (!include_gqs__) return;
    748: }
    749:
    750:
    751: void unconstrained_param_names(std::vector<std::string>& param_names__,
    752: bool include_tparams__ = true,
    753: bool include_gqs__ = true) const {
    754: std::stringstream param_name_stream__;
    755: param_name_stream__.str(std::string());
    756: param_name_stream__ << "a";
    757: param_names__.push_back(param_name_stream__.str());
    758: param_name_stream__.str(std::string());
    759: param_name_stream__ << "b";
    760: param_names__.push_back(param_name_stream__.str());
    761: param_name_stream__.str(std::string());
    762: param_name_stream__ << "ecrit";
    763: param_names__.push_back(param_name_stream__.str());
    764: param_name_stream__.str(std::string());
    765: param_name_stream__ << "vmaxLo";
    766: param_names__.push_back(param_name_stream__.str());
    767: param_name_stream__.str(std::string());
    768: param_name_stream__ << "vmaxHi";
    769: param_names__.push_back(param_name_stream__.str());
    770:
    771: if (!include_gqs__ && !include_tparams__) return;
    772: for (int k_0__ = 1; k_0__ <= M; ++k_0__) {
    773: param_name_stream__.str(std::string());
    774: param_name_stream__ << "curr_mu" << '.' << k_0__;
    775: param_names__.push_back(param_name_stream__.str());
    776: }
    777: for (int k_0__ = 1; k_0__ <= M; ++k_0__) {
    778: param_name_stream__.str(std::string());
    779: param_name_stream__ << "curr_var" << '.' << k_0__;
    780: param_names__.push_back(param_name_stream__.str());
    781: }
    782:
    783: if (!include_gqs__) return;
    784: }
    785:
    786: }; // model
    787:
    788: }
    789:
    790: typedef model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e stan_model;
    791:
    792: #include <rstan/rstaninc.hpp>
    793: /**
    794: * Define Rcpp Module to expose stan_fit's functions to R.
    795: */
    796: RCPP_MODULE(stan_fit4model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_mod){
    797: Rcpp::class_<rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e,
    798: boost::random::ecuyer1988> >("stan_fit4model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e")
    799: // .constructor<Rcpp::List>()
    800: .constructor<SEXP, SEXP, SEXP>()
    801: // .constructor<SEXP, SEXP>()
    802: .method("call_sampler",
    803: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::call_sampler)
    804: .method("param_names",
    805: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_names)
    806: .method("param_names_oi",
    807: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_names_oi)
    808: .method("param_fnames_oi",
    809: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_fnames_oi)
    810: .method("param_dims",
    811: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_dims)
    812: .method("param_dims_oi",
    813: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_dims_oi)
    814: .method("update_param_oi",
    815: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::update_param_oi)
    816: .method("param_oi_tidx",
    817: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::param_oi_tidx)
    818: .method("grad_log_prob",
    819: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::grad_log_prob)
    820: .method("log_prob",
    821: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::log_prob)
    822: .method("unconstrain_pars",
    823: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::unconstrain_pars)
    824: .method("constrain_pars",
    825: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::constrain_pars)
    826: .method("num_pars_unconstrained",
    827: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::num_pars_unconstrained)
    828: .method("unconstrained_param_names",
    829: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::unconstrained_param_names)
    830: .method("constrained_param_names",
    831: &rstan::stan_fit<model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e_namespace::model25f87f048f5_ce4ad0e5c983e908158162ec1e0e6d4e, boost::random::ecuyer1988>::constrained_param_names)
    832: ;
    833: }
    834:
    835: // declarations
    836: extern "C" {
    837: SEXP file25f8247b76af( ) ;
    838: }
    839:
    840: // definition
    841:
    842: SEXP file25f8247b76af( ){
    843: return Rcpp::wrap("ce4ad0e5c983e908158162ec1e0e6d4e");
    844: }
    845:
    846:
    Quitting from lines 98-155 (PFAB.Rmd)
    Error: processing vignette 'PFAB.Rmd' failed with diagnostics:
    Compilation ERROR, function(s)/method(s) not created! "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 39: Error: The function "malloc" must have a prototype.
    "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 148: Error: The function "free" must have a prototype.
    "/home/ripley/R/Lib32/StanHeaders/include/stan/math/memory/stack_alloc.hpp", line 235: Error: The function "free" must have a prototype.
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/BH/include/boost/config/compiler/sunpro_cc.hpp", line 117: Warning (Anachronism): Attempt to redefine BOOST_NO_CXX11_RVALUE_REFERENCES without using #undef.
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336b/RcppEigen/include/Eigen/src/SparseCore/SparseCwiseUnaryOp.h", line 49: Error: Cannot define member of undefined specialization "unary_evaluator<CwiseUnaryOp<UnaryOp, ArgType>, Eigen::internal::IteratorBased>".
    "/tmp/RtmpgBaqwe/RLIBS_8b236ca336
    Execution halted
Flavor: r-patched-solaris-x86
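
The Solaris WARN combines two problems: Pandoc (>= 1.12.3) and pandoc-citeproc are not available on that check machine, so knitr falls back to R Markdown v1, and the Oracle Developer Studio compiler then fails to build the Stan model generated for the PFAB vignette. One possible workaround, sketched below with an illustrative condition rather than the package's actual fix, is to skip the model-compiling chunks whenever the build environment cannot compile rstan code:

    ## hypothetical setup chunk near the top of vignettes/PFAB.Rmd
    can_build_stan <- requireNamespace("rstan", quietly = TRUE) &&
      rmarkdown::pandoc_available("1.12.3") &&
      !identical(Sys.info()[["sysname"]], "SunOS")
    knitr::opts_chunk$set(eval = can_build_stan)

With eval = FALSE the chunks are still displayed but not executed, so the vignette can be rebuilt on flavors that lack a working toolchain for rstan.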