This page is part of a code repository to reproduce the example analyses in the manuscript “Cross validation for model selection: a primer with examples from ecology” by L.A. Yates, Z. Aandahl, S.A. Richards, and B.W. Brook (2022). This tutorial is designed to be used alongside the manuscript.
The pinfish dataset is sourced from the R package fishmethods and contains length, age, and sex data for pinfish (Lagodon rhomboides) from Tampa Bay, Florida. The aim of the analysis is to determine which allometric growth function best describes the relationship between the available \({\sf length}\) and \({\sf age}\) measurements of pinfish, while accounting for the effects of \({\sf sex}\) and \({\sf haul}\) on the model parameters (the processed data comprise measurements from 45 separate fishery hauls). We fit all models in a Bayesian framework using Markov chain Monte Carlo (MCMC) sampling as implemented in the R package \(\texttt{brms}\).
Data preparations include: removing observations with unknown sex, setting a minimum haul size, and removing one outlier. These preparations simplify the analysis so that we can focus on model-selection aspects, but it would be possible to retain all of these data by predicting to unknown-sex fish and small hauls using the grand mean in lieu of group-level predictions, and by using robust regression to deal with the outlier.
library(fishmethods)
library(tidyverse)
data(pinfish) # 670 obs
names(pinfish)[1] <- "haul"
# remove samples with undetermined sex
fishData <- pinfish %>% filter(sex != 0) %>% mutate(sex = factor(sex)) %>% as_tibble
# restrict minimum haul size
fishData %>% group_by(haul) %>% summarise(n = n()) %>% pull(n) %>% table()
## .
## 1 2 3 4 5 6 7 8 9 10 11 12 14 16 17 19 20 23 25 27 31 38
## 9 8 8 5 6 2 3 4 5 10 1 2 1 1 2 1 2 2 1 1 1 1
min.size <- 5 # set minimum haul size
fishData <- fishData %>% group_by(haul) %>% filter(n()>= min.size) %>% ungroup
# remove outlier
fishData <- fishData %>% filter(haul != "TBD930066") %>% mutate(haul = factor(haul))
fishData
## # A tibble: 534 × 4
## haul sl age sex
## <fct> <int> <dbl> <fct>
## 1 BFC970009 139 1.26 1
## 2 BFC970009 120 2.26 2
## 3 BFC970009 120 1.26 2
## 4 BFC970009 116 1.26 1
## 5 BFC970009 117 1.26 1
## 6 BFC970009 129 2.26 1
## 7 BFC970009 126 1.26 2
## 8 BFC970009 113 1.26 1
## 9 BFC970009 118 1.26 1
## 10 BFC970009 124 1.26 1
## # … with 524 more rows
We take as candidate growth models the following commonly used non-linear functions:
\[\begin{equation} \begin{array}{ll} \mbox{Gompertz (G)} &L_{\mathrm{G}}(a) = L_0 e^{-e^{-K(a - t_0)}}\\ \mbox{logistic (log)} &L_{\mathrm{log}}(a) = {L_0}/(1 + e^{-K(a - t_0)})\\ \mbox{von Bertalanffy (vB)} \qquad\qquad &L_{\mathrm{vB}}(a) = L_0(1 - e^{-K(a - t_0)}). \end{array} \end{equation}\]
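To build intuition for how the candidate curves differ in shape, here is a minimal plotting sketch, assuming the baseline values hard-coded in the formulas below (\(L_0 = 152\), \(K = 1.3\)) and \(t_0 = 0\) for illustration:
# quick visual comparison of the three candidate growth functions
age_grid <- seq(0.5, 6.3, 0.04)
curves <- tibble(
  age = rep(age_grid, 3),
  fun = rep(c("Gompertz", "logistic", "von Bertalanffy"), each = length(age_grid)),
  length = c(152 * exp(-exp(-1.3 * age_grid)),
             152 / (1 + exp(-1.3 * age_grid)),
             152 * (1 - exp(-1.3 * age_grid)))
)
ggplot(curves, aes(age, length, col = fun)) +
  geom_line() +
  theme_classic() +
  labs(x = "age", y = "length", col = NULL)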
To implement these in \(\texttt{R}\), we ‘hard-code’ initial values (152 and 1.3) for the parameters \(L_0=\)Linf and \(K\); these starting values aid the MCMC sampling by allowing narrower, zero-centred priors on the offsets. An equivalent approach would be to use these values as the means of the corresponding prior distributions. Non-linear models generally require more informative priors than linear models, and establishing a convergent model typically involves some trial and error or empirical-Bayes methods.
# non-linear growth functions
nlf.G <- sl ~ (152 + Linf) * exp(-exp(-(1.3 + K) * (age - t0)))
nlf.log <- sl ~ (152 + Linf)/(1 + exp(-(1.3 + K) * (age - t0)))
nlf.vB <- sl ~ (152 + Linf) * (1 - exp(-1*((1.3 + K) * (age - t0))))
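A consequence of the hard-coded offsets is that the zero-centred priors defined below translate into sensible priors on the natural scale. For example, since the models use \(152+\)Linf with Linf \(\sim\) normal(0, 10), the implied prior on asymptotic length is centred at 152; a minimal sketch of that implied prior:
# implied prior on asymptotic length: L0 = 152 + Linf, Linf ~ normal(0, 10)
hist(152 + rnorm(1e4, 0, 10), breaks = 50,
     main = "Implied prior on asymptotic length", xlab = "length")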
We write functions to generate the model formulas and priors so that we can easily iterate over the 24 model variants (see the manuscript for model details).
library(brms)
# specify brms formula
make_form <- function(fun, K, t, L) {
  if (K) {f.K <- K ~ sex} else {f.K <- K ~ 1}
  if (t) {f.t <- t0 ~ sex} else {f.t <- t0 ~ 1}
  if (L) {f.L <- Linf ~ sex + (1|haul)} else {f.L <- Linf ~ 1 + (1|haul)}
  bf(get(paste0("nlf.", fun)), f.K, f.t, f.L, nl = TRUE)
}
# specify priors
make_prior <- function(K, t, L, sig_Linf = 10, sig_sex2 = 0.3) {
  p <-
    prior(normal(0, 1), nlpar = "K") +
    prior(student_t(3, 0, 10), nlpar = "Linf", class = "sd") +
    prior(normal(0, 1), nlpar = "t0") +
    set_prior(paste0("normal(0, ", sig_Linf, ")"), nlpar = "Linf")
  if (K) p <- p + set_prior(paste0("normal(0, ", sig_sex2, ")"), nlpar = "K", coef = "sex2")
  if (t) p <- p + set_prior(paste0("normal(0, ", sig_sex2, ")"), nlpar = "t0", coef = "sex2")
  if (L) p <- p + set_prior(paste0("normal(0, ", sig_sex2, ")"), nlpar = "Linf", coef = "sex2")
  return(p)
}
The model variants are stored in a grid, characterised by the growth function and fixed-effects structure.
# characterise model set
models_grid <- expand.grid(fun = c("vB", "G", "log"), K = c(0, 1), L = c(0, 1), t = c(0, 1), stringsAsFactors = FALSE) %>%
  mutate(fe = paste0(ifelse(K, "K", ""), ifelse(L, "L", ""), ifelse(t, "t", ""), ifelse(K + L + t, "", "0")),
         name = paste0(fun, ".", fe),
         dim = K + L + t)
models_grid
## fun K L t fe name dim
## 1 vB 0 0 0 0 vB.0 0
## 2 G 0 0 0 0 G.0 0
## 3 log 0 0 0 0 log.0 0
## 4 vB 1 0 0 K vB.K 1
## 5 G 1 0 0 K G.K 1
## 6 log 1 0 0 K log.K 1
## 7 vB 0 1 0 L vB.L 1
## 8 G 0 1 0 L G.L 1
## 9 log 0 1 0 L log.L 1
## 10 vB 1 1 0 KL vB.KL 2
## 11 G 1 1 0 KL G.KL 2
## 12 log 1 1 0 KL log.KL 2
## 13 vB 0 0 1 t vB.t 1
## 14 G 0 0 1 t G.t 1
## 15 log 0 0 1 t log.t 1
## 16 vB 1 0 1 Kt vB.Kt 2
## 17 G 1 0 1 Kt G.Kt 2
## 18 log 1 0 1 Kt log.Kt 2
## 19 vB 0 1 1 Lt vB.Lt 2
## 20 G 0 1 1 Lt G.Lt 2
## 21 log 0 1 1 Lt log.Lt 2
## 22 vB 1 1 1 KLt vB.KLt 3
## 23 G 1 1 1 KLt G.KLt 3
## 24 log 1 1 1 KLt log.KLt 3
For example, the model formula and priors for the first model, vB.0, are:
with(models_grid[1,], make_form(fun,K,t,L))
## sl ~ (152 + Linf) * (1 - exp(-1 * ((1.3 + K) * (age - t0))))
## K ~ 1
## t0 ~ 1
## Linf ~ 1 + (1 | haul)
with(models_grid[1,], make_prior(K,t,L))
## prior class coef group resp dpar nlpar lb ub source
## normal(0, 1) b K <NA> <NA> user
## student_t(3, 0, 10) sd Linf <NA> <NA> user
## normal(0, 1) b t0 <NA> <NA> user
## normal(0, 10) b Linf <NA> <NA> user
We fit the models in parallel, allocating each model to a single core for simultaneous evaluation. All of the models fit in about 2-3 minutes.
library(future)
seed <- 60869 # sample(1e5, 1)
m.fits <- list()
plan(multisession(workers = 24))
for (i in 1:nrow(models_grid)) {
  print(paste("Fitting model", i, models_grid[i, "name"]))
  m.fits[[models_grid[i, "name"]]] <- futureCall(
    brm,
    args = list(formula = with(models_grid[i, ], make_form(fun, K, t, L)),
                prior = with(models_grid[i, ], make_prior(K, t, L)),
                data = fishData,
                future = FALSE,
                chains = 4,
                seed = seed,
                iter = 4000),
    seed = TRUE,
    earlySignal = TRUE
  )
}
m.fits <- m.fits %>% map(value) # collect the fitted brmsfit objects from the futures
For a quick initial check of model convergence, we look at the mean \(\widehat{R}\) values:
m.fits %>% map_dbl(~ .x %>% rhat() %>% map_dbl(mean) %>% mean)
## vB.0 G.0 log.0 vB.K G.K log.K vB.L G.L
## 1.001416 1.001063 1.002009 1.000933 1.001315 1.001547 1.000123 1.001214
## log.L vB.KL G.KL log.KL vB.t G.t log.t vB.Kt
## 1.001267 1.000735 1.002496 1.000527 1.000317 1.000780 1.001470 2.809856
## G.Kt log.Kt vB.Lt G.Lt log.Lt vB.KLt G.KLt log.KLt
## 1.001769 1.002214 1.000928 1.001181 1.002665 2.771070 1.002420 1.002877
Two models, vB.Kt and vB.KLt, have \(\widehat{R}>1.1\), suggesting they have not converged. We refit them with narrower priors to see if this helps:
vB.KLt_update <- m.fits$vB.KLt %>%
  update(cores = 4, inits = "0", seed = seed, prior = make_prior(1, 1, 1, sig_Linf = 5, sig_sex2 = 0.2))
vB.Kt_update <- m.fits$vB.Kt %>%
  update(cores = 4, inits = "0", seed = seed, prior = make_prior(1, 1, 0, sig_Linf = 5, sig_sex2 = 0.2))
vB.KLt_update %>% rhat %>% mean
vB.Kt_update %>% rhat %>% mean
## [1] 1.002192
## [1] 1.00022
The new fits have acceptable \(\widehat{R}\) values, so we proceed to assess predictive scores based on approximate cross validation. The diagnostics associated with these assessments can identify further issues of model misspecification, and the selected model should also be subject to validation checks and visual inspection of the chains.
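For example, brms provides built-in methods for these checks; a minimal sketch using the first model:
pp_check(m.fits$vB.0) # posterior predictive check: simulated vs observed lengths
plot(m.fits$vB.0)     # trace plots and marginal posteriors for the chains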
To evaluate model performance from both conditional and marginal perspectives, we implement pointwise leave-one-out and leave-one-group-out (LOGO) CV strategies, respectively.
For the conditional focus, we compute Pareto-smoothed importance sampling (PSIS) leave-one-out (LOO) approximate cross validation, using the function loo() from the package of the same name.
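For reference, LOO targets the expected log pointwise predictive density,
\[\mathrm{elpd}_{\mathrm{loo}} = \sum_{i=1}^{n} \log p(y_i \mid y_{-i}),\]
where \(y_{-i}\) denotes the data with observation \(i\) removed; PSIS approximates each term by importance-weighting the posterior draws from the full-data fit, avoiding \(n\) separate refits.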
# still using 24 cores
m.loo <- m.fits %>% furrr::future_map(loo)
The pareto_k diagnostic assesses the validity of the LOO approximations: all pointwise pareto_k values should be less than 0.7.
m.loo %>% map("diagnostics") %>% map("pareto_k") %>% map_dbl(~ sum(.x>0.7))
## vB.0 G.0 log.0 vB.K G.K log.K vB.L G.L log.L vB.KL
## 0 0 0 0 0 0 0 0 0 0
## G.KL log.KL vB.t G.t log.t vB.Kt G.Kt log.Kt vB.Lt G.Lt
## 0 0 0 0 0 0 0 0 0 0
## log.Lt vB.KLt G.KLt log.KLt
## 0 0 0 0
The approximations are valid.
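Before plotting, a quick tabular ranking of the models can be obtained with loo_compare(), which reports pairwise elpd differences and their standard errors; the plots below convey the same information graphically:
loo::loo_compare(m.loo) # rank models by elpd_loo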
For the marginal focus, we use \(K\)-fold CV, manually leaving out one haul at a time.
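The estimand here is the grouped analogue of the LOO score above,
\[\mathrm{elpd}_{\mathrm{logo}} = \sum_{g=1}^{G} \sum_{i \in g} \log p(y_i \mid y_{-g}),\]
where \(y_{-g}\) denotes the data with all observations from haul \(g\) removed, so each model is refit once per haul (45 refits).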
plan(multisession(workers = 45)) # one core per haul
m.logo <- lapply(m.fits, kfold, chains = 2, future = T, group = "haul")
Since the models are refit exactly for each split of the data, the scores require no importance-sampling approximation and there are no diagnostics to evaluate.
To organise and plot the CV estimates, we write a function to compute summary statistics of the pointwise estimates, and we reorder our model grid by function type and dimension.
make_plot_data <-
  function(metric_data, levels = names(metric_data)) {
    best_model <- metric_data %>% map_dbl(mean) %>% which.max() %>% names()
    tibble(model = factor(names(metric_data), levels = levels),
           metric = metric_data %>% map_dbl(mean),
           metric_diff = metric - max(metric),
           se = metric_data %>% map_dbl(~ .x %>% {sd(.)/sqrt(length(.))}), # ordinary SE of the pointwise scores
           se_diff = metric_data %>% map(~ .x - metric_data[[best_model]]) %>% map_dbl(~ .x %>% {sd(.)/sqrt(length(.))}), # SE of the pairwise differences from the best model
           se_mod = sqrt(1 - cor(metric_data)[best_model, ]) * se[best_model]) # correlation-adjusted SE
  }
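The last column implements the correlation-adjusted standard error used for the modified one-standard-error rule,
\[\widetilde{se}_m = \sqrt{1-\rho_{m,\mathrm{best}}}\; se_{\mathrm{best}},\]
where \(\rho_{m,\mathrm{best}}\) is the correlation between the pointwise scores of model \(m\) and those of the best-scoring model: highly correlated scores imply a precisely estimated difference and hence a smaller adjusted error.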
models_grid <- models_grid %>% arrange(fun,dim)
We extract and organise the PSIS-LOO estimates
loo_pointwise <-
  m.loo %>%
  map("pointwise") %>%
  map_dfc(~ .[, "elpd_loo"]) %>%
  relocate(all_of(models_grid$name))
se_type <- "se_mod"
loo_df <-
  make_plot_data(loo_pointwise) %>%
  mutate(fe = factor(models_grid$fe, levels = models_grid$fe[1:8]),
         fun = models_grid$fun,
         dim = models_grid$dim,
         se_ose = se,
         se = .[[se_type]])
# one-standard-error rule: the simplest models whose score is within one (adjusted) SE of the best
loo_ose_models <- loo_df %>% filter(metric + se >= max(metric)) %>% filter(dim == min(dim))
fun_labels <- c(G = "Gompertz", log = "logistic", vB = "von Bertalanffy")
and generate the plot:
loo_df %>%
  ggplot(aes(x = fe)) +
  geom_point(aes(y = metric_diff, col = fun), size = 2, show.legend = FALSE) +
  geom_linerange(aes(ymin = metric_diff - se, ymax = metric_diff + se, col = fun), show.legend = FALSE) +
  theme_classic() +
  theme(strip.placement = "outside", panel.border = element_blank(),
        strip.background = element_rect(), axis.ticks.x = element_blank(),
        strip.text = element_text(size = 8), strip.background.x = element_rect(linetype = 0, fill = "grey90"),
        axis.title.y = element_text(size = 8)) +
  facet_wrap(~fun, nrow = 1, strip.position = "bottom", labeller = labeller(fun = fun_labels)) +
  geom_point(aes(y = metric_diff), shape = 1, size = 4, col = "black", data = loo_ose_models) +
  scale_color_brewer(type = "div", palette = "Dark2") +
  scale_y_continuous(labels = function(x) x*1000, limits = c(-0.022, 0.002)) +
  labs(x = NULL, subtitle = "Pinfish PSIS-LOO estimates", y = expression(Delta*"ELPD "~group("(", 10^{-3}, ")")))
See the caption and text in the manuscript for interpretation.
We extract and organise the LOGO CV estimates
logo_pointwise <-
  m.logo %>%
  map("pointwise") %>%
  map_dfc(~ .[, "elpd_kfold"]) %>%
  relocate(all_of(models_grid$name))
se_type <- "se_mod"
logo_df <- make_plot_data(logo_pointwise) %>%
  mutate(fe = factor(models_grid$fe, levels = models_grid$fe[1:8]),
         fun = models_grid$fun,
         dim = models_grid$dim,
         se_ose = se,
         se = .[[se_type]])
logo_ose_models <- logo_df %>% filter(metric + se >= max(metric)) %>% filter(dim == min(dim))
fun_labels <- c(G = "Gompertz", log = "logistic", vB = "von Bertalanffy")
and generate the plot:
logo_df %>%
  filter(metric_diff > -0.012) %>%
  ggplot(aes(x = fe)) +
  # geom_line(aes(y = metric_diff, group = fun), col = "grey40", lty = "dashed", show.legend = FALSE) +
  geom_point(aes(y = metric_diff, col = fun), size = 2, show.legend = FALSE) +
  geom_linerange(aes(ymin = metric_diff - se, ymax = metric_diff + se, col = fun), show.legend = FALSE) +
  theme_classic() +
  theme(strip.placement = "outside", panel.border = element_blank(),
        strip.background = element_rect(), axis.ticks.x = element_blank(),
        strip.text = element_text(size = 8), strip.background.x = element_rect(linetype = 0, fill = "grey90"),
        axis.title.y = element_text(size = 8)) +
  facet_wrap(~fun, nrow = 1, strip.position = "bottom", labeller = labeller(fun = fun_labels)) +
  geom_point(aes(y = metric_diff), shape = 1, size = 4, col = "black", data = logo_ose_models) +
  scale_color_brewer(type = "div", palette = "Dark2") +
  scale_y_continuous(labels = function(x) x*1000, limits = c(-0.012, 0.002)) +
  labs(x = NULL, subtitle = "Pinfish LOGO estimates", y = expression(Delta*hat(S)*" "~group("(", 10^{-3}, ")")))
See the caption and text in the manuscript for interpretation.
Load the selected model
m.vB.0 <- m.fits$vB.0
With a little effort, we generate plot data for the conditional effects:
# von Bertalanffy growth function
g.vB <- function(age, Linf, K, t0) (152 + Linf) * (1 - exp(-1*((1.3 + K) * (age - t0))))
# list of hauls
hauls <- m.vB.0$data$haul %>% unique
# set range of age values
ce_data <- hauls %>% map_dfr(~ tibble(age = seq(0.5, 6.3, 0.04), haul = .x))
# extract and tidy posterior draws
ps.0 <-
  m.vB.0 %>%
  as_tibble %>%
  rename_with(~ str_replace(.x, "r_haul__Linf\\[", "")) %>%
  rename_with(~ str_replace(.x, ",Intercept\\]", ""), ends_with("Intercept]")) %>%
  rename(K = b_K_Intercept, t0 = b_t0_Intercept, Linf0 = b_Linf_Intercept, tau = sd_haul__Linf_Intercept) %>%
  select(-lp__) %>% sample_n(500) %>% mutate(sample = 1:500)
# create long-form data frame of posterior simulated data
ce_plot_data <-
  ps.0 %>%
  pivot_longer(-any_of(c("sigma", "tau", "sample", "K", "Linf0", "t0")),
               names_to = "haul", values_to = "Linf_haul") %>%
  mutate(Linf = Linf0 + Linf_haul) %>%
  full_join(ce_data, by = "haul") %>%
  group_by(sample) %>%
  mutate(sig_err = rnorm(1, 0, mean(sigma))) %>% # generate residual error
  mutate(length_e = g.vB(age, Linf, K, t0), # expected length
         length = length_e + sig_err) %>%
  group_by(haul) %>%
  mutate(haul_col = mean(Linf) + 152) # haul colour index for plots
# match haul colours to observed data
obs_data <-
  m.vB.0$data %>%
  left_join(ce_plot_data %>%
              select(haul, haul_col) %>%
              distinct,
            by = "haul")
ce_plot_data %>%
  group_by(haul_col, age) %>%
  summarise(length = mean(length_e), .groups = "drop") %>%
  ggplot(aes(x = age)) +
  geom_line(aes(y = length, group = haul_col, col = haul_col), size = 0.25, alpha = 0.4) +
  geom_point(aes(y = sl, group = haul_col, col = haul_col), size = 0.5, alpha = 1, data = obs_data) +
  theme_classic() +
  theme(legend.position = "none", legend.key.height = unit(5, "mm"))
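Note that brms also provides a built-in conditional_effects() method, which gives a quick default view of the fitted curve; the manual construction above is what allows the haul-level curves and custom colouring. A minimal sketch:
conditional_effects(m.vB.0) # quick built-in view of the population-level growth curve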
Load the selected model
m.log.0 <- m.fits$log.0
As above, we extract posterior samples and simulate new data to generate the conditional effects plots:
# logistic growth function
g.log.0 <- function(age, Linf, K, t0) (152 + Linf)/(1 + exp(-(1.3 + K) * (age - t0)))
# set range of age values
ce_data <- tibble(age = seq(0.5, 6.3, 0.04), J = 1)
# extract and tidy posterior draws
ps.0 <-
  m.log.0 %>%
  as_tibble %>%
  select(K = b_K_Intercept,
         t0 = b_t0_Intercept,
         Linf0 = b_Linf_Intercept,
         tau = sd_haul__Linf_Intercept,
         sigma) %>%
  sample_n(500) %>%
  mutate(sample = 1:500)
# create long-form data frame of posterior simulated data
ce_plot_data <-
  full_join(ps.0 %>% mutate(J = 1), ce_data, by = "J") %>%
  group_by(sample) %>%
  mutate(sig_err = rnorm(1, 0, mean(sigma))) %>% # draw from residual error model
  mutate(tau_err = rnorm(1, 0, mean(tau))) %>% # draw from hierarchical distribution for Linf
  mutate(length = g.log.0(age, Linf0 + tau_err, K, t0) + sig_err,
         length_e = g.log.0(age, Linf0, K, t0)) # expected length
# main plot
ce_plot_data %>%
  group_by(age) %>%
  summarise(l_low = quantile(length, 0.025),
            l_hi = quantile(length, 0.975),
            length = mean(length_e)) %>%
  ggplot(aes(x = age)) +
  geom_ribbon(aes(ymin = l_low, ymax = l_hi), fill = blues9[3], alpha = 0.4) +
  geom_line(aes(y = length), size = 0.7, col = blues9[9], lty = "solid") +
  geom_point(aes(y = sl), size = 0.4, alpha = 1, data = m.log.0$data) +
  theme_classic() +
  theme(legend.position = "none", legend.key.height = unit(10, "mm"))
sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.5 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0
##
## locale:
## [1] LC_CTYPE=en_AU.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_AU.UTF-8 LC_COLLATE=en_AU.UTF-8
## [5] LC_MONETARY=en_AU.UTF-8 LC_MESSAGES=en_AU.UTF-8
## [7] LC_PAPER=en_AU.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_AU.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] brms_2.17.0 Rcpp_1.0.9 fishmethods_1.11-3 knitr_1.39
## [5] kableExtra_1.3.4 yardstick_1.0.0 workflowsets_1.0.0 workflows_1.0.0
## [9] tune_1.0.0 rsample_1.1.0 recipes_1.0.1 parsnip_1.0.0
## [13] modeldata_1.0.0 infer_1.0.2 dials_1.0.0 scales_1.2.0
## [17] broom_1.0.0 tidymodels_1.0.0 forcats_0.5.1 stringr_1.4.1
## [21] dplyr_1.0.9 purrr_0.3.4 readr_2.1.2 tidyr_1.2.0
## [25] tibble_3.1.7 ggplot2_3.3.6 tidyverse_1.3.2
##
## loaded via a namespace (and not attached):
## [1] utf8_1.2.2 tidyselect_1.1.2 lme4_1.1-30
## [4] htmlwidgets_1.5.4 grid_4.2.1 munsell_0.5.0
## [7] codetools_0.2-18 DT_0.24 future_1.26.1
## [10] miniUI_0.1.1.1 withr_2.5.0 Brobdingnag_1.2-7
## [13] colorspace_2.0-3 highr_0.9 rstudioapi_0.14
## [16] stats4_4.2.1 bayesplot_1.9.0 listenv_0.8.0
## [19] labeling_0.4.2 rstan_2.21.5 DiceDesign_1.9
## [22] farver_2.1.1 bridgesampling_1.1-2 coda_0.19-4
## [25] parallelly_1.32.1 vctrs_0.4.1 generics_0.1.3
## [28] ipred_0.9-13 xfun_0.32 R6_2.5.1
## [31] markdown_1.1 gamm4_0.2-6 projpred_2.1.2
## [34] lhs_1.1.5 cachem_1.0.6 assertthat_0.2.1
## [37] promises_1.2.0.1 nnet_7.3-17 googlesheets4_1.0.0
## [40] gtable_0.3.0 globals_0.16.0 processx_3.7.0
## [43] timeDate_4021.104 rlang_1.0.4 systemfonts_1.0.4
## [46] splines_4.2.1 gargle_1.2.0 checkmate_2.1.0
## [49] inline_0.3.19 yaml_2.3.5 reshape2_1.4.4
## [52] abind_1.4-5 modelr_0.1.8 threejs_0.3.3
## [55] crosstalk_1.2.0 backports_1.4.1 httpuv_1.6.5
## [58] tensorA_0.36.2 tools_4.2.1 lava_1.6.10
## [61] ellipsis_0.3.2 RColorBrewer_1.1-3 jquerylib_0.1.4
## [64] posterior_1.2.2 ggridges_0.5.3 plyr_1.8.7
## [67] base64enc_0.1-3 prettyunits_1.1.1 ps_1.7.1
## [70] rpart_4.1.16 zoo_1.8-10 haven_2.5.0
## [73] fs_1.5.2 furrr_0.3.1 magrittr_2.0.3
## [76] data.table_1.14.2 colourpicker_1.1.1 reprex_2.0.1
## [79] GPfit_1.0-8 googledrive_2.0.0 mvtnorm_1.1-3
## [82] matrixStats_0.62.0 hms_1.1.1 shinyjs_2.1.0
## [85] mime_0.12 evaluate_0.16 xtable_1.8-4
## [88] shinystan_2.6.0 readxl_1.4.0 gridExtra_2.3
## [91] rstantools_2.2.0 compiler_4.2.1 crayon_1.5.1
## [94] StanHeaders_2.21.0-7 minqa_1.2.4 htmltools_0.5.3
## [97] mgcv_1.8-40 later_1.3.0 tzdb_0.3.0
## [100] RcppParallel_5.1.5 lubridate_1.8.0 DBI_1.1.3
## [103] dbplyr_2.2.1 MASS_7.3-58.1 boot_1.3-28
## [106] Matrix_1.4-1 cli_3.3.0 parallel_4.2.1
## [109] gower_1.0.0 igraph_1.3.3 pkgconfig_2.0.3
## [112] numDeriv_2016.8-1.1 xml2_1.3.3 foreach_1.5.2
## [115] dygraphs_1.1.1.6 svglite_2.1.0 bslib_0.4.0
## [118] hardhat_1.2.0 webshot_0.5.3 prodlim_2019.11.13
## [121] rvest_1.0.2 distributional_0.3.0 callr_3.7.1
## [124] digest_0.6.29 rmarkdown_2.14 cellranger_1.1.0
## [127] shiny_1.7.1 gtools_3.9.3 nloptr_2.0.3
## [130] lifecycle_1.0.1 nlme_3.1-159 jsonlite_1.8.0
## [133] viridisLite_0.4.1 fansi_1.0.3 pillar_1.7.0
## [136] lattice_0.20-45 loo_2.5.1 fastmap_1.1.0
## [139] httr_1.4.3 pkgbuild_1.3.1 survival_3.4-0
## [142] glue_1.6.2 xts_0.12.1 shinythemes_1.2.0
## [145] iterators_1.0.14 class_7.3-20 stringi_1.7.8
## [148] sass_0.4.2 bootstrap_2019.6 future.apply_1.9.0